
Six ways AI chatbots and large language models can enhance cybersecurity

王林 | 2023-06-06

From a risk perspective, generative AI chatbots and large language models can be a double-edged sword, but if used correctly, they can also improve cybersecurity in key ways.

The meteoric rise of ChatGPT, developed by OpenAI, is one of the biggest stories of the year, and the potential impact of generative AI chatbots and large language models on cybersecurity is a key area of discussion. Much of that discussion centers on the security risks these new technologies can pose, from concerns about sharing sensitive business information with advanced self-learning algorithms to malicious actors using them to significantly enhance attacks.

Some countries, states, and enterprises have banned the use of generative AI technologies such as ChatGPT on data security, protection, and privacy grounds. Clearly, the security risks posed by generative AI chatbots and large language models are considerable. However, there are also many ways generative AI chatbots can enhance an enterprise's cybersecurity, giving security teams a much-needed boost in the fight against cybercrime.

Here are six ways generative AI chatbots and large language models can improve security.

Vulnerability Scanning and Filtering

According to a Cloud Security Alliance (CSA) report exploring the cybersecurity implications of large language models, generative AI models can be used to significantly enhance the scanning and filtering of security vulnerabilities. In the paper, the CSA demonstrated that OpenAI's Codex API is an effective vulnerability scanner for programming languages such as C, C#, Java, and JavaScript. "We can foresee that large language models, like those in the Codex family, will become a standard part of vulnerability scanners in the future," the paper reads. For example, scanners could be developed to detect and flag insecure code patterns in various languages, helping developers address potential vulnerabilities before they become critical security risks.
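
As a rough illustration of how such a scanner might be wired up, here is a minimal Python sketch that sends a code snippet to an OpenAI chat model and asks it to flag vulnerabilities. The model name and prompt are assumptions (the standalone Codex API has since been deprecated in favor of chat models), so treat this as a sketch rather than the CSA's method.

```python
# Minimal sketch: ask an OpenAI chat model to scan a snippet for
# vulnerabilities. Model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = """
char buf[16];
strcpy(buf, user_input);  /* unbounded copy into a fixed-size buffer */
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever your account offers
    messages=[
        {"role": "system",
         "content": "You are a static-analysis assistant. List any security "
                    "vulnerabilities in the code, with CWE IDs where possible."},
        {"role": "user", "content": SNIPPET},
    ],
)
print(response.choices[0].message.content)
```

A real scanner would batch files, cache results, and treat the model's findings as candidates for human review rather than verdicts.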

As for filtering, generative AI models can interpret and add valuable context to threat identifiers that might otherwise be missed by human security personnel. For example, the technique identifier T1059.001 in the MITRE ATT&CK framework may appear in a report but be unfamiliar to some cybersecurity professionals, and therefore require a brief explanation. ChatGPT can accurately recognize it as a MITRE ATT&CK identifier and explain the specific issue associated with it, which involves the use of malicious PowerShell scripts. It can also detail the nature of PowerShell and its potential use in cybersecurity attacks, and provide relevant examples.
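
A hedged sketch of how that enrichment could be packaged as a reusable helper in an alert pipeline; the function name, model, and prompt are illustrative assumptions rather than a published workflow.

```python
# Sketch: enrich an alert's MITRE ATT&CK technique ID with an LLM-generated
# analyst-facing explanation. Function name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def explain_attack_technique(technique_id: str) -> str:
    """Return a short plain-language summary of an ATT&CK technique ID."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{
            "role": "user",
            "content": f"In three sentences, explain MITRE ATT&CK technique "
                       f"{technique_id} and how defenders typically detect it.",
        }],
    )
    return response.choices[0].message.content

# Example: annotate each identifier surfaced by an alert feed.
for technique in ["T1059.001"]:
    print(technique, "->", explain_attack_technique(technique))
```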

In May of this year, OX Security announced the launch of OX-gpt, a ChatGPT integration designed to offer developers customized code-fix suggestions and cut-and-paste remediation code, including explanations of how the code could be exploited by hackers, the possible impact of an attack, and the potential damage to the organization.

Reverse-Engineering Add-ons and Analyzing PE File APIs

Matt Fulmer, manager of cyber intelligence engineering at Deep Instinct, said that, building on reverse-engineering frameworks such as IDA and Ghidra, generative AI/large language model (LLM) technology can be used to help build rules and reverse-engineer popular add-ons. "If you clarify your requirements and compare them against MITRE ATT&CK tactics and techniques, you can take the results offline and put them to better use as a defense."

LLMs can also help analyze applications by examining the APIs of portable executables (PEs) and explaining what they are used for, he added. "This can reduce the time security researchers spend reviewing PE files and analyzing the API communications within them."
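
A minimal sketch of the PE-analysis idea, assuming the pefile library (Fulmer does not name specific tooling): extract the binary's import table and ask an LLM to summarize what a program importing those APIs likely does. The file name, model, and prompt are illustrative.

```python
# Sketch: extract a PE file's imported APIs with pefile, then ask an LLM
# what they are typically used for. Library choice and prompt are assumptions.
import pefile
from openai import OpenAI

def imported_apis(path: str) -> list[str]:
    """Return 'dll!function' names from the PE import table."""
    pe = pefile.PE(path)
    apis = []
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode(errors="replace")
        for imp in entry.imports:
            if imp.name:  # skip ordinal-only imports
                apis.append(f"{dll}!{imp.name.decode(errors='replace')}")
    return apis

client = OpenAI()
apis = imported_apis("sample.exe")  # hypothetical file under analysis
summary = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{"role": "user",
               "content": "Summarize what a Windows program importing these "
                          "APIs is likely doing:\n" + "\n".join(apis[:50])}],
)
print(summary.choices[0].message.content)
```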

Threat Hunting Queries

According to the CSA, security defenders can increase efficiency and speed up response times by leveraging ChatGPT and other LLMs to create threat hunting queries. By generating queries for malware research and detection tools such as YARA, ChatGPT helps quickly identify and mitigate potential threats, allowing defenders to focus on the critical aspects of their cybersecurity efforts. This capability proves invaluable for maintaining a robust security posture in an evolving threat environment. Rules can be customized to an organization's specific needs and the threats it wishes to detect or monitor in its environment.
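
To make that concrete, here is a hedged Python sketch: the model is asked to draft a YARA rule, and the yara-python bindings verify that the draft at least compiles before a human reviews it. The prompt, model name, and target string are illustrative, not taken from the CSA report.

```python
# Sketch: ask an LLM to draft a YARA rule, then sanity-check that it compiles
# with yara-python before deployment. Prompt and model name are illustrative.
import yara
from openai import OpenAI

client = OpenAI()
draft = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{"role": "user",
               "content": "Write a YARA rule that flags PE files containing "
                          "the string 'Invoke-Mimikatz'. Output only the rule."}],
).choices[0].message.content

try:
    rules = yara.compile(source=draft)  # raises yara.SyntaxError on bad output
    print("Rule compiles; review before deploying:\n", draft)
except yara.SyntaxError as err:
    print("Model produced an invalid rule:", err)
```

Compiling the draft catches syntactically broken output, but an analyst still needs to confirm the rule matches what it should and nothing else.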

AI Can Improve Supply Chain Security

Generative AI models can address supply chain security risks by identifying potential vulnerabilities among suppliers. In April this year, SecurityScorecard announced a new security ratings platform that does exactly this by integrating with OpenAI's GPT-4 system and natural-language global search. According to the company, customers can ask open-ended questions about their business ecosystem, including vendor details, and quickly receive answers to drive risk management decisions. Examples include "Find my 10 lowest-rated vendors" or "Show which of my key vendors were breached in the past year"; SecurityScorecard claims these questions will produce results that enable teams to make risk management decisions quickly.

Detecting AI-Generated Text in Attacks

According to the CSA, large language models not only generate text but are also being put to work detecting and watermarking AI-generated text, capabilities that may become common features of email protection software. The CSA said that identifying AI-generated text in attacks can help detect phishing emails and polymorphic code, and that it is reasonable to assume LLMs can easily spot atypical sender addresses or their corresponding domains, as well as check whether underlying links in a message point to known malicious websites.
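
The sketch below illustrates the kinds of checks described, in plain Python: flagging an atypical sender domain and links that point at known-bad sites. The allowlist, blocklist, and sample message are hypothetical stand-ins for real threat-intelligence feeds, and a production filter would do far more (header analysis, URL expansion, AI-text classifiers).

```python
# Sketch of the checks described above: flag atypical sender domains and
# links to known-bad sites. Domain lists and message are hypothetical.
import re
from email import message_from_string

KNOWN_BAD_DOMAINS = {"evil.example", "phish.example"}  # stand-in threat feed
EXPECTED_SENDER_DOMAINS = {"example.com"}              # stand-in allowlist

def triage(raw_email: str) -> list[str]:
    msg = message_from_string(raw_email)
    findings = []

    sender = msg.get("From", "")
    match = re.search(r"@([\w.-]+)", sender)
    if match and match.group(1).lower() not in EXPECTED_SENDER_DOMAINS:
        findings.append(f"atypical sender domain: {match.group(1)}")

    payload = msg.get_payload()
    body = payload if isinstance(payload, str) else ""
    for url_domain in re.findall(r"https?://([\w.-]+)", body):
        if url_domain.lower() in KNOWN_BAD_DOMAINS:
            findings.append(f"link to known malicious site: {url_domain}")
    return findings

raw = "From: alerts@phish.example\n\nClick http://evil.example/reset now"
print(triage(raw))
```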

Secure Code Generation and Transfer

LLMs like ChatGPT can be used to generate and transfer secure code. The CSA cited the example of a phishing campaign that successfully targeted several employees at a company, potentially exposing their credentials. While it was known which employees opened the phishing emails, it was not clear whether they inadvertently executed the malicious code designed to steal their credentials.

To investigate, the CSA used a Microsoft 365 Defender advanced hunting query to find the 10 most recent logon events performed by email recipients within 30 minutes of receiving a known malicious email. Such a query helps identify any suspicious logon activity that may be related to compromised credentials.

Here, ChatGPT can provide a Microsoft 365 Defender hunting query to check logon attempts from the compromised email accounts, which can help block attackers from entering the system and clarify whether a user needs to change their password. This is a good example of reducing time to action during cyber incident response.
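
A hedged sketch of what that could look like end to end: an illustrative KQL query (a reconstruction of the scenario, not the CSA's exact query) submitted programmatically through the Microsoft Graph security API's runHuntingQuery endpoint. Token acquisition is elided, and the table and column names should be checked against your tenant's advanced hunting schema.

```python
# Sketch: run an advanced-hunting KQL query through the Microsoft Graph
# security API. The KQL is an illustrative reconstruction of the scenario
# described above, not the CSA's exact query; token acquisition is elided.
import requests

KQL = """
let MailTime = datetime(2023-06-01T10:00:00Z);  // placeholder delivery time
let Recipient = "user@example.com";             // placeholder recipient
IdentityLogonEvents
| where AccountUpn =~ Recipient
| where Timestamp between (MailTime .. (MailTime + 30m))
| top 10 by Timestamp desc
"""

token = "<bearer-token>"  # obtain via MSAL or another OAuth client flow
resp = requests.post(
    "https://graph.microsoft.com/v1.0/security/runHuntingQuery",
    headers={"Authorization": f"Bearer {token}"},
    json={"Query": KQL},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row)
```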

Building on the same example, suppose you run into the same problem and find a Microsoft 365 Defender hunting query that fits, but your environment does not use KQL (Kusto Query Language). Rather than searching for the right example in the language you need, you can ask the model to perform a query-language conversion.

He said, "This example illustrates ChatGPT's underlying Codex model can take a source code example and generate the example in another programming language. It also simplifies the process for the end user by adding key details in the answers it provides and the methodology behind new creations. ”

According to the CSA, security defenders can increase efficiency and speed up response times by leveraging ChatGPT and other LLMs to create threat search queries. By providing malware research and detection tools ( Generating queries like YARA), ChatGPT helps quickly identify and mitigate potential threats, allowing defenders to focus on critical aspects of their network security efforts. This capability has proven to be essential to maintaining robust security in an evolving threat environment Posture is priceless. Rules can be customized based on specific needs and the threats an organization wishes to detect or monitor in its environment.

AI Can Improve Supply Chain Security

Generative artificial intelligence models can address supply chain security risks by identifying potential vulnerabilities in suppliers. In April this year, SecurityScorecard announced the launch of a new security rating platform through a partnership with OpenAI’s GPT-4 Systems and natural language global search are integrated to achieve this. According to the company, customers can ask open-ended questions about their business ecosystem, including supplier details, and quickly receive answers to drive risk management decisions. For example , “Find my 10 lowest-rated vendors” or “Show which of my key vendors have been compromised in the past year” – SecurityScorecard claims these questions will produce results that enable teams to make quick risk management decisions .

Generating AI text in detection attacks

According to CSA, large language models not only generate text but also work on detection and watermarking AI-generated text, which may become a common feature of email protection software. The CSA said that identifying AI-generated text in attacks can help detect phishing emails and polymorphic code, and it can be assumed that llm can easily detect atypical emails. Address the sender or its corresponding domain, while being able to check whether underlying links in the text point to known malicious websites.

Secure code generation and transmission

LLMs like ChatGPT can be used to generate and transmit secure codes. The CSA cited an example of a phishing campaign that successfully targeted several employees within the company, potentially exposing their credentials. While it is known which employees opened the phishing emails, it is not clear whether they inadvertently executed malicious code designed to steal their credentials.

To investigate this issue, you can use Microsoft 365 Defender Advanced Search query to find the last 10 login events performed by email recipients within 30 minutes of receiving a known malicious email. This query helps identify any suspicious login activity that may be related to compromised credentials."

Here, ChatGPT can provide Microsoft365Defender search queries to check login attempts for compromised email accounts, which can help prevent attackers from entering the system and clarify whether the user needs to change their password. This is a great example of reducing time to action during cyber incident response.

Based on the same example, you may encounter the same problem and find the Microsoft365Defender lookup query, but your system does not use the KQL programming language. Instead of searching for the right example in the language you want, you can do programming language style shifting.

"This example illustrates how ChatGPT's underlying Codex model can take a source code example and generate the example in another programming language. It also does so through the methods behind the answers and new creations it provides. Adding critical details simplifies the process for the end user.” Leaders must ensure the safe use of generative AI chatbots

As with many modern technologies, AI and large language models can be a double-edged sword from a risk perspective, said Chaim Mazal, chief strategy officer at Gigamon, so leaders must ensure their teams use these products safely and securely. "Security and legal teams should work together to find the best path forward for their organizations to leverage the capabilities of these technologies without compromising intellectual property or security."

Generative AI is based on outdated, structured data, so it should only be used as a starting point when evaluating its applications in security and defense, Fulmer said. For example, if it is used for any of the benefits above, its output needs to be validated: "Take the output offline and have people make it better, more accurate, and more actionable."

Over time, generative AI chatbots and large language models will naturally enhance security and defense capabilities, but whether AI/LLMs help rather than harm an organization's cybersecurity posture will ultimately come down to internal communication and response, Mazal said. "Generative AI/large language models can be a means of enabling stakeholders to address security issues comprehensively, faster, and more effectively. Leaders must communicate how these tools can be leveraged to support organizational goals while educating teams about potential threats."

Joshua Kaiser, AI technology director and CEO at Tovie AI, said AI chatbots also need regular updates to maintain an effective defense against threats, and human oversight is essential to ensuring that large language models function properly. "In addition, large language models need to understand context to provide accurate responses and catch any security issues, and they should be tested and evaluated regularly to identify potential weaknesses or vulnerabilities," he said.
