
18 countries jointly issue AI safety guidelines

王林 · 2023-11-28 17:17:44


The UK’s National Cyber Security Centre (NCSC) has released new guidance to help developers and providers of AI systems “build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.”

How to put cybersecurity at the heart of AI systems

The guidelines for developing secure AI systems cover four key stages of the machine learning application development lifecycle: secure design, secure development, secure deployment, and secure operation and maintenance.

Secure design depends on everyone involved - system owners, developers and users - being aware of the unique security risks AI systems face and being taught how to avoid them.

The guidelines note that threats to the system should be modelled and that the system should be designed with security, functionality and performance in mind. Developers should also consider the security benefits and trade-offs when selecting an AI model (more complex is not necessarily better).

Secure development is underpinned by supply chain security. It also means protecting assets (such as models, data, prompts, software and logs), documenting models, datasets and prompts, and managing technical debt.
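To make this concrete, here is a minimal, purely illustrative Python sketch of one supply-chain control the guidelines gesture at: checking a third-party model artifact against a pinned hash before loading it. The file name and digest below are hypothetical, and the guidelines themselves do not prescribe any particular tooling.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest pinning the SHA-256 digest of each third-party artifact.
PINNED_DIGESTS = {
    "models/sentiment.onnx": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(path: str) -> None:
    """Refuse to use an artifact whose digest does not match the pinned value."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    expected = PINNED_DIGESTS.get(path)
    if expected is None or digest != expected:
        raise RuntimeError(f"Integrity check failed for {path}")

# verify_artifact("models/sentiment.onnx")  # call this before deserialising the model
```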

Secure deployment requires securing the infrastructure used in every part of the system lifecycle and continuously protecting models and data against direct and indirect access. To deal with (inevitable) security incidents, comprehensive incident response, escalation and remediation plans must be in place.

AI should be released responsibly - that is, only after its security has been thoroughly evaluated and users have been helped to evaluate its limitations and potential failure modes.

Ideally, the most secure settings are built into the system as the only option. Where configuration is necessary, the default option should be broadly secure against common threats (i.e., secure by default), and controls should be applied to prevent the system from being used or deployed in malicious ways.
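As a hypothetical sketch of the “secure by default” idea (none of these settings come from the guidelines), a service configuration can ship with the safest values built in, so that an operator has to make a deliberate, visible choice to weaken them:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceServiceConfig:
    """Hypothetical inference-service settings with secure defaults."""
    require_auth: bool = True           # callers must authenticate unless explicitly disabled
    allow_remote_plugins: bool = False  # risky capabilities are opt-in, not opt-out
    max_prompt_chars: int = 4_000       # bound inputs to limit abuse and resource exhaustion
    log_requests: bool = True           # keep an audit trail by default

config = InferenceServiceConfig()                   # secure by default
risky = InferenceServiceConfig(require_auth=False)  # weakening security is an explicit decision
```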

Finally, for secure operation and maintenance, operators are advised to monitor their systems' behaviour and inputs, enable automatic updates, and remain transparent and responsive, especially when failures such as vulnerabilities come to light.
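The guidelines do not mandate specific monitoring tools; the sketch below simply illustrates the idea of logging a model's inputs and behaviour around a generic callable, with all names and thresholds being assumptions rather than anything taken from the guidance.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model-monitor")

MAX_EXPECTED_CHARS = 4_000  # hypothetical bound on "normal" prompt length

def monitored_inference(model, prompt: str) -> str:
    """Log every request and flag inputs that fall outside expected bounds."""
    if len(prompt) > MAX_EXPECTED_CHARS:
        logger.warning("Oversized prompt (%d chars); possible probing or abuse", len(prompt))
    start = time.monotonic()
    output = model(prompt)  # 'model' is any callable that returns a string
    logger.info("Inference took %.3fs (prompt=%d chars, output=%d chars)",
                time.monotonic() - start, len(prompt), len(output))
    return output
```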

Who are the AI Cybersecurity Guidelines for?

These guidelines were drafted with the help of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), similar agencies and CERTs from around the world, and industry experts.

“The new UK-led guidelines are the first of their kind to be agreed globally. They will help developers of any system that uses AI make informed cybersecurity decisions at every stage of the development process - whether those systems are created from scratch or built on top of tools and services provided by others,” the UK’s National Cyber Security Centre noted.

"[Guidelines] are primarily intended for providers of AI systems, whether based on models hosted by the organization or using external application programming interfaces. However, we urge all stakeholders (including data scientists, developers , managers, policymakers, and risk owners) read these guidelines to help them make informed decisions about the design, deployment, and operation of their machine learning AI systems.

Shortly before these guidelines were issued, U.S. President Biden signed an executive order initiating actions aimed at protecting Americans from the potential risks of AI systems, including fraud, privacy threats, discrimination and other abuses.
