Directly at the RSAC site: Artificial intelligence tools have become the hottest topic this year!
One topic was everywhere at this year's RSAC conference in San Francisco: artificial intelligence tools. The potential of generative AI in cybersecurity tools has sparked strong interest among security professionals, but questions have been raised about the practical application of AI in cybersecurity, as well as the reliability of the data used to build AI models.
M.K. Palmore, a cybersecurity strategist at Google Cloud and a board member of Cyversity, said in an interview: "We are currently in the first round of the fight with artificial intelligence. We don't yet know how big AI's impact on the cybersecurity industry will be, or what the final results will look like. But we are hopeful: the whole industry is currently moving in this direction, which shows that we see value in using artificial intelligence to positively impact the industry."
However, Palmore also admitted that there is still more to do in the development of artificial intelligence. As things change and develop, he believes, everyone will have to adapt to this new model and make large language models (LLMs) and AI genuinely usable. Dan Lohrmann, chief information security officer at Presidio, believes artificial intelligence in cybersecurity is still in its early stages. When the topic of AI tools came up at the RSAC conference, he argued it would be a revolution: artificial intelligence will change a large part of security products, and it could change both offense and defense, much like how red and blue teams are built.
In addition, he pointed out that there is still a long way to go in streamlining the tools security teams use. He said: "I don't think we will ever achieve a single pane of glass for resource monitoring and management, but this is the most streamlined I have ever seen it."
During the 2023 RSAC conference, many companies talked about how they use generative artificial intelligence in security tools. For example, Google launched its generative artificial intelligence tool and security LLM, Sec-PaLM. Sec-PaLM is built on Mandiant’s cutting-edge intelligence on vulnerabilities, malware, threat indicators and behavioral threat actor profiles.
Stephen Hay, director of user experience at Google Cloud, said LLMs have now reached a tipping point: they can contextualize information in a way that wasn't possible before. This means we now have truly generative artificial intelligence.
Meanwhile, Mark Ryland, director of the Office of the Chief Information Security Officer at Amazon Web Services, emphasized using generative AI to detect threat activity: paying more attention to meaningful data in everyday signals and minimizing false positives. He said the only way to do this effectively is to train the machine learning models well, and that machine learning is at the core of AWS's security services.
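Ryland's point, surfacing meaningful data while minimizing false positives, can be illustrated with a toy frequency-based triage filter. This is only an illustrative sketch of the general idea, not AWS's actual approach; all function names and thresholds here are hypothetical.

```python
from collections import Counter
import math

def rarity_scores(events: list[str]) -> dict[str, float]:
    """Score each event type by negative log frequency:
    common, noisy events score near 0, rare ones score high."""
    counts = Counter(events)
    total = len(events)
    return {e: -math.log(c / total) for e, c in counts.items()}

def triage(events: list[str], threshold: float = 2.0) -> list[str]:
    """Keep only events rare enough to merit analyst attention,
    suppressing the high-volume routine noise that drives false positives."""
    scores = rarity_scores(events)
    return [e for e in events if scores[e] >= threshold]
```

For example, in a log of 95 routine logins and 5 privilege-escalation events, only the rare escalation events clear the threshold and reach the analyst.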
The company recently announced Amazon Bedrock, a new tool for building generative-AI applications on AWS. Amazon Bedrock is a service that provides API access to foundation models (FMs) from AI21 Labs, Anthropic, Stability AI, and Amazon. In addition, Tenable has launched generative AI security tools designed specifically for the research community, and its recently released report, "How Generative Artificial Intelligence is Transforming Security Research," explores how LLMs can reduce complexity and improve efficiency in research areas such as reverse engineering, code debugging, web application security, and cloud tool visibility.
The report states that ChatGPT is developing at an "alarming pace." On AI tools in cybersecurity platforms, Tenable chief security officer Bob Huber said these tools let you build a database, for example for a penetration test with a given objective, and he added that he is already seeing some companies start to use LLMs. He also pointed out that the data LLMs are built on is not necessarily verified or accurate, so guardrails need to be put in place; LLMs built on a company's own data are therefore more trustworthy.
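Huber's point about guardrails can be made concrete with a minimal post-processing check: rather than trusting model output blindly, flag any claims that cannot be matched against a verified in-house dataset. The dataset, function name, and CVE examples below are hypothetical.

```python
import re

# Hypothetical in-house set of CVE identifiers that have been verified.
VERIFIED_CVES = {"CVE-2021-44228", "CVE-2023-23397"}

def guard_llm_output(text: str) -> tuple[str, list[str]]:
    """Return the model output together with any CVE identifiers it cites
    that are absent from the verified dataset, so an analyst can review them."""
    cited = re.findall(r"CVE-\d{4}-\d{4,7}", text)
    unverified = [c for c in cited if c not in VERIFIED_CVES]
    return text, unverified
```

A check like this does not make the model's data accurate, but it turns unverifiable claims into explicit review items instead of silent errors, which is the spirit of Huber's guardrail argument.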
Some worry that connecting to LLMs such as GPT could itself affect security, and as a security practitioner it is important to understand those risks. Huber points out, however, that people simply haven't had enough time to understand the risks of generative AI. These tools are all designed to make defenders' jobs easier, but Ismael Valenzuela, BlackBerry's vice president of threat research and intelligence, noted generative AI's limitations.
"Like any other tool we use as defenders, attackers will use it too. So the best way to use these generative AI tools is as an assistant. It clearly helps us improve. But if you expect it to completely change everything, the answer is no."