
ChatGPT "Collapse", which is popular all over the Internet: helps humans write phishing email codes, and it is World Cup themed!

PHPz (forwarded)
2023-04-12 14:19:05

Produced by Big Data Digest

ChatGPT is fun, but something finally went wrong.

As an almost omnipotent language-based AI, ChatGPT can answer all sorts of questions, help you write articles, and even write code for you.

Wait, write code?

What happens if someone asks ChatGPT to write malicious code to attack others?

Recently, Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, successfully used ChatGPT to create a phishing campaign, and a World Cup themed one at that.

Let's take a look together.

How to use ChatGPT to create ransomware and phishing emails

"We start with a simple exercise to see if ChatGPT can create a Credible phishing campaign and it turned out to be true. I entered a prompt to write a World Cup themed email for simulated phishing and it created a perfect English email in seconds." Suleyman Ozarslan explain.

Ozarslan told ChatGPT that he was a security researcher building an attack-simulation tool, and asked it to help write a World Cup themed phishing email for a simulated phishing attack.

Sure enough, ChatGPT wrote one.

[Screenshot: the World Cup themed phishing email generated by ChatGPT]

In other words, while ChatGPT recognized that "phishing attacks can be used for malicious purposes and can cause harm to individuals and organizations," it still generated the email.

After completing this exercise, Ozarslan asked ChatGPT to write Swift code that could find Microsoft Office files on a MacBook, send them to a web server over HTTPS, and then encrypt the files on the MacBook.

It went off without a hitch: ChatGPT generated the sample code without any warning or pushback.

Ozarslan's exercise shows that cybercriminals can easily bypass OpenAI's safeguards, for example by posing as researchers to obscure their malicious intent.

The rise in cybercrime is tipping the balance

As the example above shows, from a cybersecurity perspective, the core challenge posed by OpenAI's creation is that anyone, regardless of technical expertise, can generate malware and ransomware code on demand.

While ChatGPT does offer real benefits to security teams (such as AI screening of emails), it also lowers the barrier to entry for cybercriminals, potentially adding complexity to the threat landscape rather than reducing it.
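For instance, a security team could put a language model in the email triage loop. Below is a minimal sketch, assuming the openai Python package (v1-style client) and an OPENAI_API_KEY in the environment; the model name, prompt wording, and sample email are illustrative assumptions, not a vendor-specified workflow.

```python
# A minimal sketch of AI-assisted email screening. Assumptions: the openai
# Python package (v1-style client), an OPENAI_API_KEY in the environment,
# and an illustrative model name and prompt.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

SCREEN_PROMPT = (
    "You are an email security assistant. Classify the following email as "
    "PHISHING or BENIGN and give a one-sentence reason.\n\nEmail:\n{body}"
)

def screen_email(body: str) -> str:
    """Ask the model for a phishing verdict on a single email body."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": SCREEN_PROMPT.format(body=body)}],
        temperature=0,  # keep verdicts as stable as possible for triage
    )
    return response.choices[0].message.content

suspicious = (
    "Congratulations! You have won World Cup final tickets. "
    "Confirm your bank details here: http://example.com/claim"
)
print(screen_email(suspicious))
```

A verdict like this would feed a triage queue rather than block mail outright, keeping an analyst in the loop.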

For example, cybercriminals can use artificial intelligence to increase the volume of phishing threats in the wild, which not only further overwhelms already stretched security teams, but means a single successful data breach can cause millions of dollars in losses.

Lomy Ovadia, SVP of R&D at email security vendor IRONSCALES, said: "When it comes to cybersecurity, ChatGPT offers far more to attackers than to their targets."

Ovadia said: "This is especially true for business email compromise (BEC) attacks, which rely on deceptive content to impersonate colleagues, company VIPs, vendors and even customers."

Ovadia believes that if CISOs and security leaders rely on policy-based security tools to detect phishing attacks that use AI/GPT-3 generated content, those tools will become obsolete, because these AI models use advanced natural language processing (NLP) to generate scam emails that are nearly impossible to distinguish from genuine ones.
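To see why, consider a toy policy-based filter; the keyword rules below are invented purely for illustration. It catches a crude lure by phrase matching, but a fluent, BEC-style message of the kind an NLP model produces avoids every telltale phrase and slips through.

```python
# A toy policy-based filter (keyword rules invented for illustration only).
PHISHING_KEYWORDS = {
    "verify your account",
    "urgent action",
    "click here now",
    "password expired",
    "wire transfer immediately",
}

def rule_based_flag(email_body: str) -> bool:
    """Flag an email only if it contains a known bad phrase."""
    lowered = email_body.lower()
    return any(phrase in lowered for phrase in PHISHING_KEYWORDS)

crude = "URGENT ACTION required: verify your account or lose access!"
fluent = ("Hi Sam, finance asked me to reroute this quarter's supplier "
          "payment. Could you confirm the new account details today?")

print(rule_based_flag(crude))   # True: matches a known phrase
print(rule_based_flag(fluent))  # False: a BEC-style lure slips through
```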

For example, earlier this year, security researchers at Singapore's Government Technology Agency crafted 200 phishing emails and compared their click-through rates against emails created with the deep learning model GPT-3; they found that more users clicked the links in the AI-generated phishing emails than in the human-written ones.

What’s the good news?

While generative AI does bring new threats to security teams, it also offers some positive use cases. For example, analysts can use the tool to check for vulnerabilities in open source code before deployment.
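As a rough illustration of that use case, the sketch below asks a chat model to review a code snippet for common vulnerability classes before it ships. It again assumes the openai v1-style Python client; the model name, prompt wording, and sample snippet are all illustrative.

```python
# A minimal sketch of AI-assisted code review before deployment, reusing the
# same openai v1-style client as the screening example; the model name,
# prompt wording, and sample snippet are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def review_snippet(code: str) -> str:
    """Ask the model to flag likely vulnerabilities in a code snippet."""
    prompt = (
        "Review this code for security vulnerabilities (injection, unsafe "
        "deserialization, path traversal, hard-coded secrets). List each "
        "finding with a severity and a suggested fix:\n\n" + code
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# A classic SQL injection the reviewer should catch.
vulnerable = "query = \"SELECT * FROM users WHERE name = '\" + user_input + \"'\""
print(review_snippet(vulnerable))
```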

"Today, we're seeing ethical hackers use existing artificial intelligence to help write vulnerability reports, generate code samples, and identify trends in large data sets. The best application of today's artificial intelligence is to help humans do more human things," said Dane Sherrets, solutions architect at HackerOne.

However, security teams trying to leverage generative AI solutions like ChatGPT still need to ensure adequate human oversight to avoid potential issues.

While the progress ChatGPT represents is exciting, the technology has not yet advanced far enough to operate fully autonomously. It requires human supervision and some manual configuration, and it cannot always run on, or train against, the very latest data and intelligence.

It is for this reason that Forrester recommends that organizations implementing generative AI deploy workflows and governance to manage AI-generated content and software, both to ensure its accuracy and to reduce the likelihood of shipping solutions with security or performance problems.
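In its simplest form, such a workflow is just a gate that refuses to ship unreviewed machine output. The sketch below is a toy illustration; the Change record and its fields are hypothetical, not part of any cited guidance.

```python
# A toy governance gate in the spirit of that guidance. The Change record
# and its fields are hypothetical, invented purely for illustration.
from dataclasses import dataclass

@dataclass
class Change:
    description: str
    ai_generated: bool
    human_reviewed: bool

def release_allowed(change: Change) -> bool:
    """Block any AI-generated change that lacks a recorded human review."""
    return not (change.ai_generated and not change.human_reviewed)

draft = Change("LLM-drafted parser", ai_generated=True, human_reviewed=False)
print(release_allowed(draft))  # False: the gate holds
draft.human_reviewed = True
print(release_allowed(draft))  # True: sign-off recorded
```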

Inevitably, the real risk of AI like ChatGPT will come down to which side of the AI war can exploit automation more effectively.

Related reports: https://venturebeat.com/security/chatgpt-ransomware-malware/


Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for removal.