
Ten ways ChatGPT and generative AI are strengthening Zero Trust


What kind of feedback are cybersecurity CEOs getting from customers?

CEOs of cybersecurity providers interviewed at the 2023 RSA Conference said their enterprise customers acknowledge the value of ChatGPT in improving network security, but also expressed concerns about the risk of accidentally leaking confidential data and intellectual property. The Cloud Security Alliance (CSA) released its first-ever ChatGPT guidance document during the conference, calling on the industry to improve its artificial intelligence roadmap.

Connie Stack, CEO of NextDLP, said her company investigated ChatGPT use among Next customers and found that 97% of large enterprises have employees using the tool, and that 10% of endpoints on Next's Reveal platform have accessed ChatGPT.

In an interview at RSA Conference 2023, Stack said, "This level of ChatGPT usage is of great concern to some of our customers as they evaluate this new data loss vector. Some Next customers chose to disable it outright, including a healthcare company that could not accept any risk of leaking intellectual property and trade secrets to a public-facing generative large language model. Other companies were open to the potential benefits and chose to proceed cautiously, using ChatGPT to support things like enhanced data loss 'threat hunting' and security-related content creation."

Building a new cybersecurity muscle memory

Generative AI's potential to improve the learning and productivity of threat analysts, threat hunters, and security operations center (SOC) staff is the main motivation for cybersecurity vendors rushing to adopt tools such as ChatGPT. Continuous learning needs to be embedded deep in an organization's threat defenses so teams can rely on "muscle memory" to adapt, respond, and neutralize an intrusion attempt before it begins.

The most discussed topic at the 2023 RSA Conference was undoubtedly the newly released ChatGPT-based products and integrations.

Of the 20 vendors announcing new products and integrations, the most notable were Airgap Networks, Google Security AI Workbench, Microsoft Security Copilot (launched ahead of the show), Recorded Future, SecurityScorecard, and SentinelOne.

Among them, Airgap's Zero Trust Firewall (ZTFW) and ThreatGPT are particularly worthy of attention. ZTFW is designed to complement existing perimeter firewall infrastructure by adding dedicated micro-segmentation and access layers in the network core. Ritesh Agrawal, CEO of Airgap, said, "With highly accurate asset discovery, agentless micro-segmentation, and secure access, Airgap provides rich intelligence to combat evolving threats. What customers now need is an easy way to use this functionality without any programming. That's the beauty of ThreatGPT: the pure data-mining intelligence of AI combined with a simple natural language interface. It's a game changer for security teams."

Among the 20 zero-trust startups, Airgap is recognized as having one of the most innovative engineering and product development teams. Airgap's ThreatGPT combines a graph database with GPT-3 models to provide previously unavailable cybersecurity insights. The company configured GPT-3 models to analyze natural-language queries and identify potential security threats, and integrated a graph database to provide contextual intelligence on traffic relationships between endpoints.
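
The following is a minimal sketch, in Python, of the pattern described above: a natural-language question is translated into a lookup over a graph of endpoint traffic relationships. It is not Airgap's implementation; the dict-of-sets and the translate_question() helper are illustrative stand-ins for the graph database and the GPT-3 layer.

```python
# A minimal sketch (not Airgap's implementation) of answering a natural-language
# security question against a graph of endpoint traffic relationships.
# translate_question() is a hypothetical stand-in for the GPT-3 layer, and the
# dict-of-sets below stands in for the graph database.

from collections import defaultdict

# Toy traffic graph: source endpoint -> set of destinations it has talked to.
traffic_graph = defaultdict(set)
for src, dst in [("laptop-17", "db-prod-1"), ("laptop-17", "hr-share"),
                 ("iot-cam-3", "db-prod-1"), ("iot-cam-3", "build-server")]:
    traffic_graph[src].add(dst)

def translate_question(question: str) -> str:
    """Hypothetical NL layer: a real system would have an LLM map the
    question to a graph query. This sketch hard-codes a single intent."""
    if "talk to" in question and "db-prod-1" in question:
        return "db-prod-1"
    raise ValueError("intent not recognized in this sketch")

def endpoints_talking_to(target: str) -> list:
    """Graph lookup: which endpoints have observed traffic to the target asset?"""
    return sorted(src for src, dsts in traffic_graph.items() if target in dsts)

question = "Which endpoints talk to db-prod-1?"
print(endpoints_talking_to(translate_question(question)))  # ['iot-cam-3', 'laptop-17']
```

In a production system, the translation step is where the LLM adds value: mapping open-ended analyst questions onto graph traversals that would otherwise require query-language expertise.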

How ChatGPT strengthens Zero Trust

One way generative AI strengthens Zero Trust is by identifying and hardening an enterprise's most vulnerable threat surfaces. Earlier this year, Zero Trust creator John Kindervag said in an interview that "you start with a protect surface" and described what he calls the Zero Trust learning curve: "You don't start with the technology; that's a misunderstanding."

Here are potential ways generative AI can enhance the core zero-trust framework defined in the NIST 800-207 standard:

1. Unify and learn from threat analysis and incident response at the enterprise level

Chief Information Security Officers (CISOs) are looking to consolidate their technology stacks because so many conflicting systems exist around threat analysis, incident response, and alerting that SOC analysts are not sure what is most urgent. Generative AI and ChatGPT have proven to be powerful tools for integrating applications, and they will ultimately give CISOs a single view of threat analysis and incident response across their infrastructure.

2. Quickly identify identity-based internal and external intrusion attempts through continuous monitoring

The core of zero trust is identity. Generative AI has the potential to quickly identify whether a given identity’s activity is consistent with its previous history.

CISOs believe that the most challenging intrusions to prevent often start from within, leveraging legitimate identities and credentials.

One of the core advantages of large language models (LLMs) is their ability to discover anomalies in data from small sample sizes, which is well suited to protecting IAM, PAM, and Active Directory. LLMs have proven effective at analyzing user access logs and detecting suspicious activity.
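
As a rough illustration of the access-log analysis described above, the sketch below flags events that deviate from an identity's own baseline (a new country or an unusual hour). The baseline logic is a deliberately simple stand-in; a real deployment would rely on an LLM or a dedicated behavioral-analytics model with far richer context.

```python
# A minimal sketch, not a production detector: flag access-log events that
# deviate from an identity's own history (new country or unusual hour).
# The baseline data below is illustrative.

from datetime import datetime

history = {
    "alice": {"countries": {"US"}, "hours": set(range(8, 19))},  # 08:00-18:59
}

def is_anomalous(user: str, country: str, ts: str) -> bool:
    base = history.get(user)
    if base is None:
        return True  # no baseline at all is itself suspicious
    hour = datetime.fromisoformat(ts).hour
    return country not in base["countries"] or hour not in base["hours"]

events = [
    ("alice", "US", "2023-05-10T09:15:00"),
    ("alice", "RO", "2023-05-10T03:40:00"),  # new country, off-hours
]
for user, country, ts in events:
    if is_anomalous(user, country, ts):
        print(f"ALERT: review {user} login from {country} at {ts}")
```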

3. Overcoming micro-segmentation's most challenging obstacles

The many challenges of doing micro-segmentation correctly can cause large micro-segmentation projects to be delayed for months or even years. While network micro-segmentation is designed to isolate defined segments within an enterprise network, it is rarely a one-and-done task.

Generative AI can help by determining how best to introduce micro-segmentation solutions without disrupting system and resource access. Best of all, it can potentially reduce the thousands of tickets created in IT service management systems by bad micro-segmentation projects.
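
A hypothetical example of the planning step this enables: derive candidate segments and the allow-rules they need from observed traffic, so access is preserved when enforcement begins. The workload tags and flows below are invented for illustration; a real project would pull them from discovery tooling.

```python
# A minimal sketch of micro-segmentation planning: group workloads into
# candidate segments by tag and record the allow-rules needed so existing
# access is not broken when policy is enforced.

from collections import defaultdict

# (source workload, source tag, destination workload, destination tag, port)
flows = [
    ("web-1", "web", "app-1", "app", 8443),
    ("web-2", "web", "app-1", "app", 8443),
    ("app-1", "app", "db-1", "db", 5432),
]

segments = defaultdict(set)          # tag -> workloads in that segment
allow_rules = set()                  # (src_tag, dst_tag, port) to permit

for src, src_tag, dst, dst_tag, port in flows:
    segments[src_tag].add(src)
    segments[dst_tag].add(dst)
    allow_rules.add((src_tag, dst_tag, port))

print("Proposed segments:", {k: sorted(v) for k, v in segments.items()})
print("Rules required to preserve access:", sorted(allow_rules))
```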

4. Solve the security challenges of managing and protecting endpoints and identities

Attackers are always looking for gaps between endpoint security and identity management. Generative AI and ChatGPT can help close them, giving threat hunters the intelligence they need to know which endpoints are most vulnerable to compromise.

To strengthen the “muscle memory” of security responses, especially when it comes to endpoints, generative AI can be used to continuously learn how attackers attempt to penetrate endpoints, their target points, and the identities they attempt to use.

5. Taking least privilege access to a whole new level

Applying generative AI to restrict access to resources by identity, system, and duration is one of the most powerful AI-enhanced zero-trust use cases. Querying ChatGPT for audit data based on resource and permission profiles could save system administrators and SOC teams thousands of hours each year.

A core part of least privilege access is deleting obsolete accounts. Ivanti's 2023 State of Security Readiness Report found that 45% of businesses suspect former employees and contractors still have active access to company systems and files.

Dr. Srinivas Mukkamala, chief product officer at Ivanti, noted, "Large enterprises often fail to consider the vast ecosystem of applications, platforms, and third-party services that grant access well beyond an employee's tenure. We call these 'zombie credentials,' and an alarming number of security professionals, and even upper-level executives, still have access to their former employers' systems and data."
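
A simple sketch of the audit this implies: cross-check an HR offboarding list against accounts still enabled in the identity provider and flag zombie credentials. The data structures are illustrative; real inputs would come from HRIS and IAM exports.

```python
# A minimal sketch of a zombie-credential audit: flag users who appear on the
# HR offboarding list but whose accounts are still active in the IAM export.

from datetime import date

offboarded = {"bob": date(2022, 11, 30), "eve": date(2023, 1, 15)}   # HR record
active_accounts = {"alice", "bob", "eve"}                             # IAM export

zombies = [
    (user, left.isoformat())
    for user, left in offboarded.items()
    if user in active_accounts
]
for user, left in zombies:
    print(f"Zombie credential: '{user}' left {left} but is still enabled")
```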

6. Fine-tune behavioral analytics, risk scoring, and real-time adjustments to security roles

Generative AI and ChatGPT will enable SOC analysts and teams to grasp more quickly the anomalies surfaced by behavioral analytics and risk scoring, and then immediately block any lateral movement attempted by potential attackers. Defining privileged access solely through risk scores will become obsolete; generative AI will put each request into context and alert on those that indicate a potential threat.
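
A minimal sketch of contextual risk scoring along these lines appears below. The weights and thresholds are illustrative, not a recommended policy; the point is that context (device posture, target sensitivity) shifts the decision beyond a bare anomaly score.

```python
# A minimal sketch of contextual risk scoring: combine a behavioral anomaly
# score with request context to pick allow / step-up / block. Weights and
# thresholds are illustrative only.

def risk_score(anomaly: float, unmanaged_device: bool, sensitive_target: bool) -> float:
    score = anomaly                      # 0.0 (normal) .. 1.0 (highly unusual)
    score += 0.2 if unmanaged_device else 0.0
    score += 0.3 if sensitive_target else 0.0
    return min(score, 1.0)

def decide(score: float) -> str:
    if score >= 0.8:
        return "block and alert SOC"     # likely lateral movement attempt
    if score >= 0.5:
        return "require step-up authentication"
    return "allow"

print(decide(risk_score(anomaly=0.6, unmanaged_device=True, sensitive_target=True)))
# block and alert SOC
```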

7. Improved real-time analytics, reporting and visibility to help stop online fraud

Most successful zero-trust initiatives are built on data integration that aggregates real-time analytics, reporting, and visibility. Enterprises can use this data to train generative AI models, providing unprecedented insight to SOC threat hunters and analysts.

The results will be immediately measurable in stopping e-commerce fraud, as attackers target e-commerce systems that cannot keep up with the pace of attacks. Threat analysts using ChatGPT with access to historical data will know immediately whether a flagged transaction is legitimate.
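
As an illustration, the sketch below packages a flagged transaction and the customer's recent history into a prompt for an analyst-facing review step. The query_llm() function is a hypothetical placeholder for whatever model endpoint a team actually uses; nothing here is a specific vendor integration.

```python
# A minimal sketch: build a prompt combining a flagged transaction with the
# customer's history so an LLM can assist the fraud analyst's review.
# query_llm() is a hypothetical placeholder.

import json

def build_review_prompt(txn, history):
    return (
        "You are assisting a fraud analyst. Given the customer's recent "
        "order history and a newly flagged transaction, state whether the "
        "transaction looks consistent with the history and why.\n\n"
        f"History: {json.dumps(history)}\n"
        f"Flagged transaction: {json.dumps(txn)}"
    )

def query_llm(prompt):
    raise NotImplementedError("wire this to your model endpoint")

history = [{"amount": 42.50, "country": "US"}, {"amount": 39.99, "country": "US"}]
txn = {"amount": 2150.00, "country": "NG", "new_device": True}
print(build_review_prompt(txn, history))
```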

8. Improve context-aware access and fine-grained access control

Another core component of zero trust is access-control granularity based on identity, assets, and endpoints. Look to generative AI to create new workflows that combine network traffic patterns, user behavior, and contextual intelligence to recommend policy changes based on identity and role. Threat hunters, SOC analysts, and fraud analysts will learn of every abused privileged-access credential in seconds and be able to restrict all access with a simple ChatGPT command.
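
One way the "simple command" idea could look in practice is sketched below: a natural-language instruction is parsed into a structured revocation action. The regex parser stands in for the LLM, and revoke() is a hypothetical placeholder for an IAM/PAM API call; neither reflects a specific product.

```python
# A minimal sketch: parse a natural-language restriction command into a
# structured action. The regex is a stand-in for an LLM intent parser and
# revoke() is a hypothetical dry-run placeholder for an IAM/PAM API call.

import re

def parse_command(command: str) -> dict:
    m = re.match(r"revoke (?P<scope>.+) access for (?P<user>\S+)", command, re.I)
    if not m:
        raise ValueError("command not understood in this sketch")
    return {"action": "revoke", "scope": m["scope"], "user": m["user"]}

def revoke(user: str, scope: str) -> None:
    print(f"[dry-run] would revoke '{scope}' access for '{user}'")

action = parse_command("Revoke privileged database access for jdoe")
revoke(action["user"], action["scope"])
```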

9. Strengthen configuration and compliance to better align with zero-trust standards

The LLMs on which ChatGPT is based have proven effective at improving anomaly detection and simplifying fraud detection. The next step in this area is to use these models to automate the creation of access policies and user groups and to track compliance with the real-time data the models generate. ChatGPT could greatly improve the efficiency of configuration management, risk governance, and compliance reporting.
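
A small sketch of this workflow: a generated access policy (written here by hand, but in practice drafted by an LLM from a natural-language request) is checked against a few zero-trust-style compliance rules before it is applied. The rules and fields are illustrative only.

```python
# A minimal sketch of policy automation plus compliance checking: validate a
# generated access policy against simple zero-trust style rules before use.

generated_policy = {
    "group": "finance-analysts",
    "resources": ["erp-reporting"],
    "mfa_required": True,
    "max_session_hours": 8,
    "allow_any_source_ip": False,
}

def compliance_findings(policy: dict) -> list:
    findings = []
    if not policy.get("mfa_required"):
        findings.append("MFA must be required for all access policies")
    if policy.get("max_session_hours", 0) > 12:
        findings.append("session lifetime exceeds the 12-hour limit")
    if policy.get("allow_any_source_ip"):
        findings.append("policies must not allow any-source access")
    return findings

issues = compliance_findings(generated_policy)
print("compliant" if not issues else f"blocked: {issues}")
```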

10. Limit the blast radius of phishing attacks

This is a threat surface attackers thrive on, using social engineering to trick victims into paying out large sums of cash. ChatGPT has proven very capable at natural language processing (NLP), and its underlying LLM can detect anomalous text patterns in emails, patterns that are often hallmarks of business email compromise (BEC) scams. ChatGPT can also detect and identify AI-generated emails and send them to quarantine. Generative AI is being used to develop next-generation cyber-resilience platforms and detection systems.
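
A toy version of the screening step described above is sketched below. A keyword-and-header heuristic stands in for the LLM-based text analysis; the workflow (score the message, then quarantine) is the point, not the detector itself.

```python
# A minimal sketch of BEC screening: score an email for common BEC markers
# (urgency phrases, reply-to domain mismatch) and route high scores to
# quarantine. A real system would use an LLM classifier instead.

BEC_PHRASES = ("wire transfer", "urgent payment", "gift cards", "keep this confidential")

def bec_score(subject: str, body: str, from_domain: str, reply_to_domain: str) -> int:
    text = f"{subject} {body}".lower()
    score = sum(phrase in text for phrase in BEC_PHRASES)
    if reply_to_domain != from_domain:          # classic BEC header mismatch
        score += 2
    return score

def route(score: int) -> str:
    return "quarantine" if score >= 2 else "deliver"

s = bec_score(
    subject="Urgent payment needed today",
    body="Please process this wire transfer and keep this confidential.",
    from_domain="example.com",
    reply_to_domain="examp1e-mail.com",
)
print(route(s))  # quarantine
```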

Focus on turning Zero Trust weaknesses into strengths

ChatGPT and generative AI can meet the challenge of constantly changing threat intelligence and security knowledge by strengthening the "muscle memory" of enterprise Zero Trust security. It's time to think of these technologies as learning systems that help enterprises continuously improve their cybersecurity automation and human skills, defending against external and internal threats by logging and inspecting all network traffic, restricting and controlling access, and authenticating and protecting network resources.
