
OpenAI, Microsoft, Zhipu AI and other 16 companies around the world signed the Frontier Artificial Intelligence Security Commitment

WBOY · Original · 2024-06-03

The safety issues of artificial intelligence (AI) are receiving unprecedented attention in discussions around the world.

OpenAI co-founder and chief scientist Ilya Sutskever and superalignment team co-lead Jan Leike recently left OpenAI one after another. Around his departure, Leike published a series of posts on X saying that OpenAI and its leadership had put safety behind shiny products. This drew widespread attention in the industry and highlighted, to a certain extent, how serious current AI safety issues have become.

On May 21, an article published in Science called on world leaders to take stronger action against the risks of artificial intelligence. Authoritative scientists and scholars, including Turing Award winners Yoshua Bengio, Geoffrey Hinton, and Yao Qizhi (Andrew Chi-Chih Yao), argue that the progress made in recent months has not been enough. In their view, AI technology is developing rapidly, yet its development and application carry many potential risks, including threats to data privacy, the misuse of AI weapons, and the impact of AI on the job market. Governments must therefore strengthen supervision and legislation and formulate appropriate policies to manage and guide the development of artificial intelligence. In addition, the article also warned:


We believe that the uncontrolled development of AI is likely to eventually lead to large-scale loss of life and of the biosphere, and to the marginalization or extinction of humanity.

In their view, the safety problems of AI models have risen to a level that could threaten the future survival of humanity.

Likewise, the safety of AI models has become an issue that can affect everyone and that everyone needs to care about.

May 22 is destined to be a major moment in the history of artificial intelligence: OpenAI, Google, Microsoft, Zhipu AI, and other companies from different countries and regions jointly signed the Frontier AI Safety Commitments, while the Council of the European Union formally approved the Artificial Intelligence Act (AI Act), the world's first comprehensive AI regulation, which is about to take effect.

Once again, AI safety has been raised at the policy level.

Artificial Intelligence Seoul Summit "Declaration"

In the "Declaration" with the theme of "Safety, Innovation, and Inclusion" At the "AI Seoul Summit" (AI Seoul Summit), 16 companies from North America, Asia, Europe and the Middle East reached an agreement on security commitments for AI development and jointly signed a cutting-edge artificial intelligence security commitment, including the following points:

  • Ensuring the safety of frontier AI through responsible governance structures and transparency;
  • Responsibly explaining, on the basis of an AI safety framework, how the risks of frontier AI models will be measured;
  • Establishing clear processes for mitigating the risks of frontier AI models.

Turing Award winner Yoshua Bengio believes that the signing of the Frontier AI Safety Commitments "marks an important step in establishing an international governance system to promote artificial intelligence safety."

As a large-model company from China, Zhipu AI has also signed these new frontier AI safety commitments. The complete list of signatories is as follows:

[Image: the full list of the 16 signatory companies]

In this regard, Anna Makanju, Vice President of Global Affairs at OpenAI, said: "The Frontier AI Safety Commitments are important for promoting the wider implementation of safety practices for advanced AI systems." Tom Lue, General Counsel and Director of Governance at Google DeepMind, said: "These commitments will help establish important frontier AI safety best practices among leading developers. With advanced technology comes the important responsibility of ensuring AI safety."

Recently, Zhipu AI was also invited to the top AI conference ICLR 2024, where, in a keynote speech titled "ChatGLM's Road to AGI," the team shared its specific practices for AI safety.

They believe that superalignment technology will help improve the safety of large models, and they have launched a superalignment program similar to OpenAI's, hoping to enable machines to learn and judge on their own and thereby learn what content is safe.


They revealed that these safety measures are built into GLM-4V to prevent harmful or unethical behavior while protecting user privacy and data security; the subsequent upgrades of GLM-4, namely GLM-4.5 and its upgraded models, should also build on superintelligence and superalignment technology.

We also noticed that, in a recently published paper, teams from Zhipu AI and Tsinghua University introduced Self-Contrast, a feedback-free large language model alignment method that exploits large numbers of self-generated negative samples.

According to the paper, with only supervised fine-tuning (SFT) targets, Self-Contrast uses the LLM itself to generate large numbers of diverse candidate responses, and then uses a pre-trained embedding model to filter them by text similarity into multiple negative samples.
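To make this concrete, here is a minimal, hypothetical sketch of such a pipeline. It is not the authors' implementation: the embedding model, the `generate` callable, and the similarity threshold below are illustrative assumptions.

```python
# Minimal sketch of the Self-Contrast idea described above (not the authors' code).
# Assumption: `generate(prompt, n)` samples n responses from the policy LLM;
# sentence-transformers stands in for "a pre-trained embedding model".
from sentence_transformers import SentenceTransformer, util


def build_self_contrast_pairs(prompt, sft_target, generate,
                              num_candidates=16, sim_threshold=0.8):
    """Build (chosen, rejected) preference pairs from SFT data alone."""
    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice

    # 1. Let the LLM itself produce many diverse candidate responses.
    candidates = generate(prompt, num_candidates)

    # 2. Embed the SFT target and all candidates.
    target_emb = embedder.encode(sft_target, convert_to_tensor=True)
    cand_embs = embedder.encode(candidates, convert_to_tensor=True)

    # 3. Keep candidates that are sufficiently dissimilar to the SFT target
    #    as negatives (the threshold is a made-up value).
    sims = util.cos_sim(target_emb, cand_embs)[0]
    negatives = [c for c, s in zip(candidates, sims) if float(s) < sim_threshold]

    # 4. Pair the SFT target (chosen) with each negative (rejected) for DPO.
    return [{"prompt": prompt, "chosen": sft_target, "rejected": n}
            for n in negatives]
```

The resulting triples could then be fed into a standard DPO training loop; the more negatives are kept, the more preference pairs each SFT example yields.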


Paper link: https://arxiv.org/abs/2404.00604

Direct preference optimization (DPO) experiments on three datasets show that Self-Contrast can consistently outperform SFT and standard DPO training by a large margin. Moreover, the performance of Self-Contrast continues to improve as the number of self-generated negative samples increases.
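For reference, the standard DPO objective that this comparison is based on can be written as follows (this is the generic DPO formulation, not notation taken from the Self-Contrast paper); in this setting, y_w is the SFT target and y_l a self-generated negative:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[
    \log\sigma\!\left(
      \beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
      -\beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
    \right)
  \right]
```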


Overall, this study offers a new approach to alignment in the absence of preference data (which methods such as RLHF require). When preference annotations are expensive and hard to obtain, unlabeled SFT data can be used to construct synthetic preference data, and increasing the number of negative samples can compensate for the performance loss caused by having too few positive samples.

The Council of the European Union formally approved the Artificial Intelligence Act

On the same day, the Council of the European Union formally approved the Artificial Intelligence Act (AI Act), the world's first comprehensive AI regulation. This landmark regulation will take effect next month. Although it applies only within the scope of EU law, it sets a potential global benchmark for technology used in business and daily life.

"This landmark regulation, the first of its kind in the world, addresses a global technological challenge while creating new opportunities for our societies and economies," Belgium's Digital Minister Mathieu Michel said in a statement.

This comprehensive AI legislation takes a “risk-based” approach, meaning the higher the risk of harm to society, the stricter the rules. For example, general-purpose AI models that do not pose systemic risks will be subject to some limited requirements, but those that do are subject to more stringent regulations.

Fines for violations of the Artificial Intelligence Act are set as either a percentage of the offending company's global annual turnover for the preceding fiscal year or a predetermined amount, whichever is higher.

Nowadays, whether for small technology companies or large government agencies, preventing and solving AI safety problems has been put on the agenda. As Philip Torr, professor of engineering science at the University of Oxford, said: "At the last AI summit, the world agreed that we needed to act, and now it is time to move from vague recommendations to concrete commitments."

