Build a generative AI innovation security system: three tips from Amazon's chief security officer
Amazon Cloud Technology has millions of customers around the world and tracks billions of events every day, which enables it to detect more security threats.
In 2019, Amazon Cloud Technology Chief Security Officer Steve Schmidt officially announced the launch of Amazon Cloud Technology re:Inforce, the first conference focused on cloud security issues. The conference has now been held five times and has become a benchmark for cloud security.
Steve Schmidt joined Amazon in 2010 and served as chief information security officer of Amazon Cloud Technology for 12 years; since 2022 he has been Amazon's chief security officer. Recently, The Wall Street Journal interviewed him about enterprise security in the age of generative AI.
Steve Schmidt said that the job of the security team is to help enterprises understand the benefits and risks of innovative technologies such as AI, and how to use them to improve the enterprise's security efficiency.
Amazon Cloud Technology Chief Security Officer Steve Schmidt
Steve Schmidt believes that when any business discusses the security of generative AI and puts it to use, it must ask itself three questions about whether the AI can be abused, hacked, or tampered with:
First, where is the data?
Enterprises need to understand the entire workflow of training models with data, where the data comes from, and how it is processed and protected.
Second, what happens to my query and any related data?
Training data is not the only sensitive data set that enterprises need to pay attention to. When enterprises and their users start to use generative AI and large language models, they will quickly learn how to make queries more effective. More details and specific requirements will then be added to the query, resulting in better results. Enterprises using generative AI for queries need to clearly understand how generative AI will process the data input into the model and the query results. Enterprise queries themselves are also sensitive and should be part of a data protection plan.
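Treating queries as protected data can be made concrete with a pre-submission redaction step. The sketch below is a minimal illustration, not any Amazon tooling: the patterns and labels are hypothetical, and a real deployment would rely on a proper data-classification service rather than ad-hoc regexes.

```python
import re

# Hypothetical patterns for illustration only; real systems should use
# a dedicated data-classification/DLP service, not hand-rolled regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ACCOUNT_ID": re.compile(r"\b\d{12}\b"),  # e.g. a 12-digit account ID
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens before the
    query leaves the enterprise boundary for an external model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact_prompt("Summarize tickets from alice@example.com for account 123456789012"))
```

The same hook is also a natural place to log which queries were redacted, feeding the visibility requirements discussed later in the article.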
Third, is the output of the generative AI model accurate enough?
From a security perspective, the usage scenario of generative AI defines the risk, and different scenarios have different accuracy requirements. If you are using a large language model to generate custom code, then you have to make sure that the code is well written, follows your best practices, and so on.
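One common way to enforce that requirement is to gate model output behind tests the team already trusts. This is a minimal sketch under assumed names (`generated_source` stands in for model output; no real assistant API is invoked):

```python
# `generated_source` stands in for code returned by a model.
generated_source = '''
def slugify(title):
    return "-".join(title.lower().split())
'''

def accept_if_passes(source: str, checks) -> bool:
    """Run candidate code in a scratch namespace, then apply the team's
    own checks; reject the snippet on any assertion failure."""
    namespace = {}
    exec(source, namespace)  # in production: isolate in a sandbox, not in-process
    try:
        for check in checks:
            check(namespace)
    except AssertionError:
        return False
    return True

def check_basic(ns):
    assert ns["slugify"]("Hello World") == "hello-world"
    assert ns["slugify"]("  Trailing Space ") == "trailing-space"

print("accepted" if accept_if_passes(generated_source, [check_basic]) else "rejected")
```

The point is the gate, not the sandboxing details: generated code only enters the codebase once it clears checks written by humans.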
After explaining the security issues of using generative AI, Steve Schmidt also gave three security suggestions for innovating with generative AI:
First, saying "no" is easy for the security team, but it is not the right thing to do. Train internal staff to understand company policies on the use of artificial intelligence so they can use it safely, and instruct employees to work in ways consistent with the company's AI use policy. It is easy for the security team to say "no," but it is just as easy for business teams, developers, and others to bypass the security team.
Second, visibility. Businesses need visibility tools to understand how employees use data, limit access to data beyond work requirements, and monitor how employees use external services to access that data. If any non-compliance with the policy is found, such as access to sensitive data outside of work requirements, the behavior will be stopped. In other cases, if the data an employee is using is less sensitive but may violate policy, the employee will be proactively contacted to understand the true purpose and seek a solution.
Third, solve problems through mechanisms. Mechanisms are reusable tools that allow businesses to drive specific behaviors precisely over time. For example, when an employee operates in violation of regulations, the system will prompt the employee through a pop-up window, recommend the use of specific internal tools, and report related issues.
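The block/nudge split described above can be expressed as a small, reusable policy check. The sketch below is hypothetical (the service names and classification labels are invented for illustration), but it captures the mechanism's shape: hard-stop sensitive flows, prompt and redirect the rest.

```python
from dataclasses import dataclass

# Hypothetical policy table: which AI services are approved, and which
# internal tool to recommend instead. All names are illustrative.
APPROVED_SERVICES = {"internal-llm-gateway"}
RECOMMENDED_ALTERNATIVE = "internal-llm-gateway"

@dataclass
class AccessEvent:
    user: str
    service: str
    data_classification: str  # e.g. "public", "internal", "sensitive"

def evaluate(event: AccessEvent) -> str:
    """A reusable mechanism: block sensitive flows, nudge on the rest."""
    if event.service in APPROVED_SERVICES:
        return "allow"
    if event.data_classification == "sensitive":
        return "block"  # stop the behavior outright
    # Less sensitive but off-policy: prompt the employee instead.
    return f"prompt: consider using {RECOMMENDED_ALTERNATIVE}"

print(evaluate(AccessEvent("alice", "external-chatbot", "sensitive")))
```

Because the decision logic lives in one function rather than in ad-hoc reviews, the same behavior can be driven consistently across teams over time, which is what the article means by a mechanism.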
Safety has always been the highest priority of Amazon Cloud Technology, and Steve Schmidt is also one of the practitioners and advocates of Amazon's security culture.
"Security teams should use off-the-shelf generative AI applications to drive the security industry's upgrade starting at the code stage. This is true for any enterprise, including Amazon." This is Steve Schmidt's suggestion for generative AI. Using generative AI to improve the writing of secure code can effectively push the entire industry toward a higher level of security.
The Amazon CodeWhisperer code assistant and the new generative AI assistant Amazon Q can both help enterprises generate better code or provide suggestions directly as software engineers write code.
Amazon CodeWhisperer is an AI coding assistant with built-in security scanning capabilities. It can help developers generate code based on annotations, track open source references, scan for vulnerabilities, and is free for individual developers. It also provides customization capabilities that allow enterprises to securely connect CodeWhisperer to the enterprise's internal code storage to improve the effectiveness of CodeWhisperer and speed up the completion of development tasks.
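The comment-to-code workflow mentioned above typically looks like the following: the developer writes an intent comment and the assistant proposes a body. The function below is only an illustration of the kind of completion such a workflow produces; an assistant's actual output will vary.

```python
import hashlib

# Developer-written intent comment, of the kind an assistant completes from:
# Compute the SHA-256 hex digest of a file, reading it in chunks so large
# files do not have to fit in memory.
def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Even with assistant-generated code like this, the built-in security scanning and the team's own review remain the final gate before the code is merged.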
Amazon Q can answer developers' code-related questions within Amazon CodeWhisperer, attach code that can be applied with one click, and provide code transformation features that help developers significantly reduce the tedious work of application maintenance and upgrades, cutting the time required from days to minutes.
At the same time, at the Amazon Cloud Technology 2023 re:Invent Global Conference, Amazon Cloud Technology also launched security service capabilities with generative AI capabilities.
Amazon Inspector’s Amazon Lambda function code scanning feature now leverages generative AI and automated reasoning to enable code fixes. While previous scanning capabilities highlighted the location of problematic code, potential impact, and provided recommendations, generative AI can now also create contextually relevant code patches for multiple classes of vulnerabilities. Developers can quickly perform operations such as verification and code replacement to solve problems quickly and effectively.
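Inspector's actual findings and generated patches vary by vulnerability class; the hypothetical before/after below merely illustrates the kind of contextual fix described, using a classic code-injection flaw in a Lambda-style handler.

```python
import json

# Vulnerable pattern a code scanner would flag in a Lambda-style handler:
# eval() on untrusted input allows arbitrary code execution.
def handler_vulnerable(event, context=None):
    return eval(event["body"])  # flagged: code injection

# The kind of contextual patch such a tool could propose: parse the
# payload as data (JSON) instead of executing it as code.
def handler_fixed(event, context=None):
    return json.loads(event["body"])

print(handler_fixed({"body": '{"status": "ok"}'}))
```

The developer's remaining work matches the article's description: verify the proposed replacement and apply it, rather than writing the fix from scratch.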
Amazon Detective is now able to provide discovery group summaries using generative AI, which automatically analyzes discovery groups and provides insights in natural language to accelerate security investigations. The purpose of the Amazon Detective service is to make it easier for users to analyze, investigate, and quickly determine the root cause of potential security issues or suspicious activity. New generative AI capabilities can provide specific users with a broader security perspective and more security knowledge.
Recently, Amazon Cloud Technology and NVIDIA also noted in their latest joint announcement that the GB200 will benefit from the enhanced security of the Amazon Nitro system, fully protecting customer code and data during processing, both on the client side and in the cloud.
Steve Schmidt said it's not about what the security team itself wants to do, it's about making sure that we're always helping our customer teams, our internal teams, move forward toward their goals.