
What is the development trend of artificial intelligence chatbots in the field of cybersecurity?

WBOY
2023-04-22

The chatbot ChatGPT launched by OpenAI has many good uses, but like any new technology, some people will use ChatGPT for nefarious purposes.


From relatively simple tasks such as writing emails to more complex jobs such as drafting papers or writing code, ChatGPT, OpenAI's AI-driven natural language processing tool, has attracted great interest since its launch.

Of course, ChatGPT isn’t perfect—it’s been known to make mistakes when it misinterprets the information it’s learning from—but many view it and other AI tools as the future of the internet.

OpenAI has added an entry to ChatGPT's terms of service that prohibits generating malware, including ransomware, keyloggers, viruses, or other software designed to cause harm, as well as attempts to create spam and other use cases tied to cybercrime.

But, like any innovative technology, there are already people trying to use ChatGPT for nefarious purposes.

Shortly after ChatGPT was released, cybercriminals posted on underground forums discussing how ChatGPT could be used to help conduct malicious cyber activities, such as writing phishing emails or helping compile malware.

There are concerns that cybercriminals will try to use ChatGPT and other artificial intelligence tools (such as Google Bard) as part of their efforts. While these AI tools won’t revolutionize cyberattacks, they can still help cybercriminals carry out malicious campaigns more effectively.

Sergey Shykevich, threat intelligence manager at cybersecurity provider Check Point, said: "At least in the short term, I don't think ChatGPT will create a whole new type of cyberattack. The focus will be on making their daily operations more cost-efficient."

Phishing attacks are the most common component of malicious hacking and fraud campaigns. Whether cyberattackers use email to spread malware or phishing links, or to convince victims to transfer money, email is the key initial tool of persuasion.

This reliance on email means criminal gangs need a constant stream of clear, usable copy. In many cases, especially in phishing, the attacker's goal is to convince someone to do something, such as transfer money. Fortunately, many of these phishing attempts are still easy to identify as spam, but effective automated copywriting could make those emails much more convincing.
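To see why fluent copy matters, consider a toy filter that flags emails on surface errors alone. The word list and patterns below are hypothetical and far cruder than any real spam filter; they exist only to illustrate the point:

```python
import re

# Hypothetical tell-tale strings a naive filter might flag; real spam
# filters rely on far richer statistical and reputation signals.
SUSPICIOUS_PATTERNS = [
    r"\bdear costumer\b",       # common misspelling of "customer"
    r"\bverify you account\b",  # grammar slip
    r"!!+",                     # repeated exclamation marks
    r"\burgent\b.*\bwire transfer\b",
]

def crude_phishing_score(text: str) -> int:
    """Count how many naive red flags appear in an email body."""
    lowered = text.lower()
    return sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, lowered))

clumsy = "Dear costumer, URGENT!! Please verify you account now."
fluent = "Hi Sam, the Q3 invoice is attached; could you review it today?"

print(crude_phishing_score(clumsy))  # → 3
print(crude_phishing_score(fluent))  # → 0
```

The clumsy message trips several of these naive flags, while fluent, machine-polished copy sails through untouched, which is exactly the gap attackers hope automated copywriting will exploit.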

Cybercrime is a global industry, and cybercriminals send phishing emails to potential targets around the world, which means language can be a barrier, especially for the more sophisticated spear-phishing campaigns. These campaigns rely on the victim believing they are talking to a trusted contact; if an email is full of unusual spelling, grammatical errors, or odd punctuation, people are less likely to believe they are talking to a friend or colleague.

But if harnessed correctly, AI chatbots can write email copy in any language a cyberattacker wants.

Shykevich said: "For example, the biggest obstacle for Russian cybercriminals is language, specifically English. They now hire graduates with English degrees to write phishing emails or staff call centers, and they have to pay for that. Tools like ChatGPT can save them a lot of money in creating all kinds of different phishing messages, and I think that's the approach they're going to pursue."

In theory, there are protective measures in place to prevent abuse; for example, ChatGPT requires users to register with an email address and to verify the registration with a phone number.

While ChatGPT may refuse to write phishing emails outright, it can be asked to create email templates for other messages commonly exploited by cyberattackers, such as announcements of an annual bonus, notices that an important software update must be downloaded and installed, or requests to view an attachment urgently.

Adam Meyers, senior vice president at cybersecurity and threat intelligence provider CrowdStrike, said: "Crafting an email that convinces someone to click on a link, to get something like a meeting invitation for example, is something you may not be able to do if you are not a native English speaker. But you can ask ChatGPT to create a beautifully formatted, grammatically correct invitation for you."

Abuse of these tools is not limited to email; criminals can use them to script text for any text-based online platform. That could be useful to attackers running scams, or even to advanced threat groups conducting espionage, especially for creating fake social media profiles to lure people in.

Kelly Shortridge, a cybersecurity expert and senior principal product technologist at cloud computing provider Fastly, said: "If you want to create a credible business persona, you can post plausible-sounding business statements on LinkedIn to make yourself look like a real businessperson trying to make connections, and ChatGPT is a perfect fit for that." Creating an online profile and filling it with posts and information is a time-consuming process.

Shortridge believes cyberattackers can use AI tools such as ChatGPT to write convincing content with far less effort than doing it by hand.

“A lot of these social engineering campaigns require a lot of effort because you have to build those profiles,” she said. She believes AI tools can significantly lower the barrier to entry.

She said: "I believe ChatGPT can write very persuasive articles."

The nature of technological innovation means that whenever something new appears, there will always be someone trying to take advantage of it to achieve malicious purposes. Even if developers use the most innovative means to prevent abuse, the cunning nature of cybercriminals and fraudsters means they may find ways to circumvent protections.

Shykevich said: "There is no way to reduce abuse completely to zero. That will never happen in any system." He hopes that highlighting the potential cybersecurity issues will prompt more discussion about how to prevent AI chatbots from being exploited for the wrong purposes.

He said: "ChatGPT is a great technology, but like any new technology, it has risks, and it's important to discuss and be aware of those risks. I think the more we talk about it, the more likely OpenAI and similar companies are to invest more in reducing abuse."

AI chatbots such as ChatGPT also have benefits for cybersecurity. They are particularly good at processing and understanding code, so defenders may be able to use them to understand malware. Since they can also write code, these tools could help developers complete projects and produce better, more secure code faster, which benefits everyone.
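As a sketch of that defensive use, an analyst might wrap a suspicious snippet in a prompt and hand it to a chatbot. The `openai` client usage and the model name below are assumptions for illustration only; the prompt-building part runs entirely offline:

```python
import os

def build_analysis_prompt(snippet: str) -> str:
    """Wrap a suspicious code snippet in a defensive-analysis request."""
    return (
        "You are assisting a malware analyst. Explain step by step what "
        "the following code does, and flag any behavior that could be "
        "malicious (persistence, exfiltration, obfuscation):\n\n"
        f"```\n{snippet}\n```"
    )

suspicious = "import base64,os;exec(base64.b64decode(os.environ['P']))"
prompt = build_analysis_prompt(suspicious)

# Sending the prompt requires an API key; the model name here is an
# assumption for illustration, not a recommendation.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # third-party client, installed separately
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
else:
    print(prompt)  # offline: just show what would be sent
```

The model's explanation is a starting point for a human analyst, not a verdict; as the article notes, these tools make mistakes.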

Jeff Pollard, principal analyst at research firm Forrester, pointed out that ChatGPT can significantly reduce the time required to generate security incident reports.

He noted: "Faster turnaround means more time for other things, such as testing, assessment, investigation, and response, all of which help security teams scale." He added that the bot can suggest next steps based on the available data.

"If security orchestration, automation and response are set up correctly to speed up the retrieval of artifacts, this can speed up detection and response and help security operations center analysts make better decisions," he said.

So chatbots may make life harder for some cybersecurity companies, but there may also be a silver lining.

Industry media contacted OpenAI for comment but did not receive a reply. However, when reporters asked ChatGPT itself what rules it has to prevent abuse for phishing, they received the following response: "It's important to note that while AI language models like ChatGPT can generate text similar to phishing emails, they cannot perform malicious actions on their own and require user intent and action to cause harm. Therefore, users should exercise caution and good judgment when using AI technology, and remain vigilant against phishing and other malicious activities."


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for removal.