
ChatGPT is a double-edged sword in cybersecurity


ChatGPT is an AI-driven prototype chatbot designed to support a wide range of use cases, including code development and debugging. One of its main attractions is that users can interact with it conversationally and get help with everything from writing software to understanding complex topics, drafting papers and emails, improving customer service, and testing different business or market scenarios. But it can also be put to darker purposes.

Since OpenAI released ChatGPT, many security experts have predicted that it was only a matter of time before cybercriminals started using the AI chatbot to write malware and carry out other malicious activities. As with any new technology, given enough time and incentive, someone will find a way to exploit it. Just a few weeks later, that time appeared to have arrived: cybercriminals had begun using ChatGPT to quickly build hacking tools, and scammers were testing ChatGPT's ability to build other chatbots designed to lure targets by posing as young women. In fact, researchers at Check Point Research (CPR) report that at least three black-hat hackers on underground forums have demonstrated how they used ChatGPT's AI to conduct malicious attacks.

In one documented example, Israeli security firm Check Point discovered a post on a popular underground hacking forum from a hacker who said he was experimenting with the popular AI chatbot to "re-create malware."

[Image: ChatGPT allows users to ask simple questions or make requests, such as writing an email that appears to come from a hosting provider]

One hacker used ChatGPT to generate Android malware, which was then compressed and distributed across the web. The malware is reportedly capable of stealing files of interest. Another hacker demonstrated a tool that could install a backdoor on a computer and potentially infect it with additional malware.

Check Point noted in its assessment that some hackers were using ChatGPT to create their first scripts. In the forum mentioned above, one user shared a piece of Python code written with ChatGPT that can encrypt files on a victim's computer. While the code could be used for benign purposes, Check Point states that "ChatGPT generates code that can be easily modified to fully encrypt files on a victim's computer without any user interaction." In addition, a hacker posted on an underground forum that he had used ChatGPT to create code that uses a third-party API to retrieve the latest cryptocurrency prices for use in dark-web market payment systems.
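For context, the price-lookup portion of such code is trivial to produce. Here is a minimal, benign sketch of the pattern, assuming the public CoinGecko API for illustration (the forum post did not name the specific third-party service used):

```python
# Minimal, benign sketch of fetching current cryptocurrency prices.
# The CoinGecko endpoint below is an assumption for illustration; the
# forum post did not identify which third-party API was actually used.
import requests

def get_prices(coins=("bitcoin", "monero"), currency="usd"):
    resp = requests.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": ",".join(coins), "vs_currencies": currency},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"bitcoin": {"usd": 23000.0}, ...}

if __name__ == "__main__":
    print(get_prices())
```

A dozen lines like these are exactly the kind of boilerplate a chatbot produces reliably, which is why researchers see them as a low bar for unskilled actors.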

The security firm emphasized that while the hacking tools coded with ChatGPT appear "very basic," "it's only a matter of time before more sophisticated threat actors enhance the way they use AI-based tools." Rik Ferguson, vice president of security intelligence at US cybersecurity firm Forescout, said ChatGPT does not yet appear capable of writing anything as sophisticated as the major ransomware strains seen in recent high-profile hacking incidents, such as Conti, which was notoriously used in the breach of Ireland's national health service. However, he said, OpenAI's tool will lower the barrier to entry for newcomers to the illegal market by letting them build more basic but similarly effective malware.

Alex Holden, founder of cyber intelligence company Hold Security, said he has also seen dating scammers begin using ChatGPT as they try to create convincing personas. "They are planning to create chatbots impersonating mostly girls, trying to automate small talk for use in online scams."

The developers of ChatGPT have implemented filtering for malicious requests that blocks the AI from responding to obvious demands to build spyware. However, the chatbot came under additional scrutiny after security analysts discovered that it could be used to write grammatically correct phishing emails free of typos.

From writing malware to creating darknet markets

In one instance, a malware author revealed in a forum used by other cybercriminals how he had experimented with ChatGPT to see whether he could reproduce known malware strains and techniques.

In one example of his success, the individual shared code for a Python-based information stealer he developed using ChatGPT that can search for, copy, and exfiltrate 12 common file types, such as Office documents, PDFs, and images, from infected systems. The same malware author also showed how he used ChatGPT to write Java code that downloads the PuTTY SSH and telnet client and runs it covertly on a system via PowerShell.
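To illustrate how little code the file-discovery portion of such a stealer actually requires, here is a benign sketch of enumerating files by extension; the extension list is illustrative, not the one from the forum post, and the copying and exfiltration logic is deliberately omitted:

```python
# Benign sketch: enumerate files by extension under a directory.
# This is only the harmless discovery step; no copying or exfiltration.
from pathlib import Path

TARGET_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".jpg", ".png"}  # illustrative list

def find_files(root="."):
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in TARGET_EXTENSIONS:
            yield path

if __name__ == "__main__":
    for f in find_files():
        print(f)
```

The point is that each building block of an information stealer is individually innocuous, which is precisely what makes request filtering so difficult.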

Another threat actor published a Python script he had generated with the chatbot to encrypt and decrypt data using the Blowfish and Twofish encryption algorithms. Security researchers found that while the code could be used for entirely benign purposes, a threat actor could easily tweak it to run on a system without any user interaction, turning it into ransomware in the process. Unlike the author of the information stealer, this attacker appears to have very limited technical skill; he claimed the Python script he generated with ChatGPT was the first script he had ever created.
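As the researchers note, symmetric encryption itself is entirely benign; the danger lies in how it is wired up. A minimal sketch of Blowfish encryption and decryption, assuming the PyCryptodome library (the forum script itself is not public, and Twofish is not in PyCryptodome, so only Blowfish is shown), might look like this:

```python
# Benign sketch of Blowfish encryption/decryption with PyCryptodome
# (pip install pycryptodome). Illustrative only; not the forum script.
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

def encrypt(data: bytes, key: bytes) -> bytes:
    iv = get_random_bytes(Blowfish.block_size)        # 8-byte IV
    cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
    return iv + cipher.encrypt(pad(data, Blowfish.block_size))

def decrypt(blob: bytes, key: bytes) -> bytes:
    iv, body = blob[:Blowfish.block_size], blob[Blowfish.block_size:]
    cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
    return unpad(cipher.decrypt(body), Blowfish.block_size)

key = get_random_bytes(16)                            # 16-byte key
assert decrypt(encrypt(b"hello", key), key) == b"hello"
```

Turning a routine like this into ransomware is largely a matter of pointing it at a victim's files and withholding the key, which is why researchers flagged the script despite its legitimate uses.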

In a third instance, security researchers found a cybercriminal discussing how he had used ChatGPT to create a fully automated dark-web marketplace for trading stolen bank account and payment card data, malware tools, drugs, ammunition, and various other illegal goods.

A near-zero barrier to generating malware


Since OpenAI released the AI tool, concerns about threat actors abusing ChatGPT have been widespread, and many security researchers believe the chatbot significantly lowers the bar for writing malware.

Sergey Shykevich, threat intelligence group manager at Check Point, reiterated that with ChatGPT, a malicious actor needs no coding experience to write malware: "You just need to know what functionality the malware or any program should have. ChatGPT will write the code for you to perform the required function. So the short-term concern is definitely that ChatGPT allows low-skilled cybercriminals to develop malware," Shykevich said. "In the longer term, I think more sophisticated cybercriminals will also adopt ChatGPT to make their campaigns more efficient, or to address different gaps they may have."

"From an attacker's perspective, the ability of AI systems to generate code allows malicious actors to easily bridge any skills gap they may encounter by acting as a translator between languages." Horizon3AI Customer Success added manager Brad Hong. These tools provide a way to create code templates on demand that are relevant to attackers' goals and reduce their need to search developer sites like Stack Overflow and Git.

Even before threat actors were caught abusing ChatGPT, Check Point, like several other security vendors, had demonstrated how adversaries could leverage the chatbot in malicious campaigns. In a blog post, the vendor described how its researchers created a perfectly legitimate-sounding phishing email simply by asking ChatGPT to write one that appeared to come from a fictitious web hosting service. The researchers also demonstrated how they had ChatGPT write VBS code that could be pasted into an Excel workbook to download an executable from a remote URL.

The purpose of the test was to demonstrate how an attacker could abuse an AI model such as ChatGPT to create a complete infection chain, from the initial spear-phishing email to running a reverse shell on the affected system.

As things stand, ChatGPT cannot replace skilled threat actors—at least not yet. But security researchers say there is a lot of evidence that ChatGPT does help low-skilled hackers create malware, which will continue to raise public concerns about cybercriminals abusing the technology.

Bypassing ChatGPT’s Restrictions

Initially, some security researchers felt the restrictions in the ChatGPT user interface were weak and found that threat actors could easily bypass them. Since then, Shykevich said, OpenAI has been working to improve the chatbot's restrictions.

"We are seeing restrictions on the ChatGPT user interface become much higher each week. As a result, it is now more difficult to use ChatGPT for malicious or abusive activity," he said.

But cybercriminals can still abuse the program by carefully choosing, or avoiding, certain words and phrases that let users get around the restrictions. Matt Lewis, director of commercial research at NCC Group, describes interacting with the models online as an "art form."

"If you avoid using the word malware and just ask it to show you an example of code that encrypts a file, based on how malware is designed, that's what it's going to do," Lewis said. "It has a way of liking being directed, and there are some interesting ways to make it do what you want it to do in many different ways."

In a presentation on the topic, Lewis demonstrated how ChatGPT would "write an encryption script" that, while not full-blown ransomware, could still be dangerous. "It's going to be a hard problem to solve," Lewis said of preventing such workarounds, adding that policing language for context and intent will be very difficult for OpenAI.

To further complicate matters, Check Point researchers observed threat actors using a Telegram bot wired to the API for a GPT-3 model called text-davinci-003, rather than ChatGPT itself, in order to get around the chatbot's restrictions.

ChatGPT is just a user interface on top of OpenAI's models. Developers can integrate the back-end models into their own applications, consuming them through an API that does not carry the same protections as the chatbot interface.

"From what we've seen, the barriers and limitations OpenAI has put in place on the ChatGPT interface don't apply to those using these models through the API," Shykevich said.

Threat actors can also evade the restrictions through careful prompting. CyberArk has tested ChatGPT since its launch and discovered blind spots in its guardrails: with enough persistence and repeated requests, the chatbot will eventually deliver the desired code. CyberArk researchers also report that by continuously querying ChatGPT and receiving a new piece of code each time, a user can create polymorphic malware that is highly evasive to detection.
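CyberArk has not published its exact prompts, so the sketch below only illustrates the regenerate-and-compare idea behind that finding, using a harmless task: ask the model for the same function repeatedly at high temperature and hash each response to show that successive answers are textually distinct even when functionally similar. It reuses the legacy SDK and placeholder key from the previous sketch.

```python
# Benign illustration of the "polymorphism" idea: request the same benign
# function repeatedly and show each response differs textually.
import hashlib
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_variant(task: str) -> str:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Write a Python function that {task}. Vary names and structure.",
        max_tokens=150,
        temperature=1.0,  # high temperature encourages varied output
    )
    return resp.choices[0].text

variants = [generate_variant("reverses a string") for _ in range(3)]
for v in variants:
    # Differing hashes indicate textually distinct (polymorphic) variants
    print(hashlib.sha256(v.encode()).hexdigest())
```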

Polymorphic viruses can be very dangerous, and online tools and frameworks that generate such viruses already exist. ChatGPT's code-generation ability is most beneficial to unskilled coders and script kiddies.

In that sense, this is not a new capability as far as attackers are concerned, nor is it a particularly effective way to generate malware variants; better tools already exist. Where ChatGPT may be new is in allowing less-skilled attackers to generate potentially dangerous code.

Making it harder for cybercriminals


OpenAI and the developers of other similar tools have installed filters and controls, and they continually improve them in an attempt to limit misuse of the technology. For now at least, the AI tools remain glitchy and prone to what many researchers describe as outright errors, which could thwart some malicious efforts. Even so, many predict that the potential for misuse of these technologies will remain high in the long term.

To make it harder for criminals to abuse these technologies, developers need to train and improve their AI engines to identify requests that could be used maliciously, Shykevich said. Another option, he said, is to implement authentication and authorization requirements for using the OpenAI engine; even something similar to what online financial institutions and payment systems use today would be enough.

As for preventing criminal use of ChatGPT, Shykevich said that ultimately, "unfortunately, enforcement has to be through regulation." OpenAI has implemented controls, such as policy-violation warnings, that prevent ChatGPT from responding to obvious requests to build spyware, although hackers and journalists have found ways to bypass these protections. Shykevich also said companies like OpenAI may have to be legally compelled to train their AI to detect such abuse.


This article is translated from: https://www.techtarget.com/searchsecurity/news/365531559/How-hackers-can-abuse-ChatGPT-to-create-malware

