Home  >  Article  >  Technology peripherals  >  The world's first "AI System Security Development Guidelines" were released, proposing four aspects of safety supervision requirements

The world's first "AI System Security Development Guidelines" were released, proposing four aspects of safety supervision requirements

WBOY
WBOYforward
2023-11-28 14:34:43923browse

The worlds first AI System Security Development Guidelines were released, proposing four aspects of safety supervision requirements

On November 26, 2023, the cybersecurity regulatory authorities of 18 countries including the United States, the United Kingdom, and Australia jointly issued the world's first "AI System Security Development Guidelines" to achieve Protect AI models from malicious tampering and urge AI companies to pay more attention to "security by design" when developing or using AI models.

The worlds first AI System Security Development Guidelines were released, proposing four aspects of safety supervision requirements

The U.S. Cybersecurity and Infrastructure Security Agency (CISA), as one of the main participants, stated that the world is experiencing an inflection point in the rapid development of AI technology, and AI technology is likely to be The most impactful technology today. However, ensuring cybersecurity is key to building safe, reliable and trustworthy AI systems. To this end, we have united the cybersecurity regulatory authorities of multiple countries and cooperated with technical experts from companies such as Google, Amazon, OpenAI, and Microsoft to jointly write and publish this guideline, aiming to improve the security of AI technology applications

It is understood that this guideline is the world’s first guidance document for the development safety of AI systems issued by an official organization. The guidelines clearly require that AI companies must prioritize ensuring safe results for customers, actively advocate transparency and accountability mechanisms for AI applications, and build the organization's management structure with safety design as the top priority. The guidelines aim to improve the cybersecurity of AI and help ensure the safe design, development and deployment of AI technology.

In addition, based on the U.S. government’s long-standing experience in cybersecurity risk management, the guidelines require all AI R&D companies to conduct sufficient testing before publicly releasing new AI tools to ensure that security measures have been taken. Measures to minimize social harms (such as prejudice and discrimination) and privacy concerns. The guidelines also require AI R&D companies to commit to facilitating third parties to discover and report vulnerabilities in their AI systems through a bug bounty system, so that vulnerabilities can be quickly discovered and repaired

Specifically, the guidelines released this time are for AI systems Security development puts forward four major regulatory requirements:

1. Give priority to "security by design" and "security by default"

AI development companies have repeatedly emphasized "security by design" in their guidelines ” and the “safe by default” principle. This means they should proactively take measures to protect AI products from attacks. To comply with the guidelines, AI developers should prioritize safety in their decision-making processes and not just focus on product functionality and performance. The guidelines also recommend that products provide the safest default application option and clearly communicate to users the risks of overriding that default configuration. Furthermore, as required by the Code, developers of AI systems should be responsible for downstream application outcomes, rather than relying on customers to control security

Excerpt from the request: “The user (whether the or integrating external AI components) often lack sufficient visibility and expertise to fully understand, assess, or address the risks associated with the AI ​​systems they are using. Therefore, in accordance with the 'safe by design' principle, providers of AI components Should be responsible for the security consequences of users downstream of the supply chain.”

2. Pay close attention to complex supply chain security

AI tool developers often rely on third-party components when designing their own products, such as basic Models, training datasets and APIs. A large supplier network will bring a larger attack surface to the AI ​​​​system, and any weak link in it may have a negative impact on the security of the product. Therefore, the guidelines require developers to fully assess the security risks when deciding to reference third-party components. When working with third parties, developers should review and monitor the vendor's security posture, require vendors to adhere to the same security standards as their own organization, and implement scanning and quarantine of imported third-party code.

Excerpt from the request: “Developers of mission-critical systems are required to be prepared to switch to alternative solutions if third-party components do not meet security standards. Businesses can use NCSC’s Supply Chain Guidance Resources such as the Software Artifact Supply Chain Level (SLSA), which tracks supply chain and software development life cycle certifications.”

3. Consider the unique risks faced in AI applications

AI systems will generate some unique threats (such as prompt injection attacks and data poisoning) when applied, so developers need to fully consider the unique security factors of AI systems. An important component of a "secure by design" approach to AI systems is to set up safety guardrails for AI model output to prevent the leakage of sensitive data and limit the operation of AI components used for tasks such as file editing. Developers should incorporate AI-specific threat scenarios into pre-release testing and monitor user input for malicious attempts to exploit the system.

Required excerpt: "The term 'adversarial machine learning' (AML) is used to describe the exploitation of security vulnerabilities in machine learning components, including hardware, software, workflows, and supply chains. AML enables Attackers can induce unexpected behaviors in machine learning systems, including: affecting the classification or regression performance of the model, allowing users to perform unauthorized operations, and extracting sensitive model information."

4. AI system Security development should be continuous and collaborative

The guidelines outline the best security practices for the entire life cycle stages of AI system design, development, deployment, operation and maintenance, and emphasize the continuous monitoring of deployed AI systems. Importance in order to spot model behavior changes and suspicious user input. The principle of "security by design" is a key component of any software update, and the guidelines recommend that developers automatically update by default. Finally, it is recommended that developers take advantage of the vast AI community feedback and information sharing to continuously improve the security of the system

Excerpt from the request: "When needed, AI system developers can Escalate the problem to the larger community, such as issuing an announcement in response to a vulnerability disclosure, including a detailed and complete enumeration of common vulnerabilities. When a security issue is discovered, developers should take action to mitigate and fix the problem quickly and appropriately."

Reference link:

The content that needs to be rewritten is as follows: In November 2023, the United States, the United Kingdom and global partners issued a statement

4 of the Global Artificial Intelligence Safety Guidelines Key points

The above is the detailed content of The world's first "AI System Security Development Guidelines" were released, proposing four aspects of safety supervision requirements. For more information, please follow other related articles on the PHP Chinese website!

Statement:
This article is reproduced at:51cto.com. If there is any infringement, please contact admin@php.cn delete