
The risk of AI getting out of control increases: Open model weight triggers Meta protest

王林 | 2023-10-11 08:37

Editors: Du Wei, Xiaozhou

In the era of large models, both open source and closed source AI have pros and cons. What matters is how the technology is handled in use.

People have always been divided on the choice between open source and closed source in the field of AI. In the era of large models, however, the power of open source has quietly emerged. According to a previously leaked internal Google document, the community is rapidly building on open source models such as Meta's LLaMA to approach the capabilities of OpenAI's and Google's large models.

There is no doubt that Meta is at the core of the open source world, with continued efforts such as the recent release of Llama 2. But prominence attracts criticism, and Meta has recently found itself in "trouble" over open source.

Outside Meta's offices in San Francisco, a group of protesters holding signs gathered to protest Meta's strategy of publicly releasing AI models, claiming that these releases cause the "irreversible proliferation" of potentially unsafe technology. Some protesters even compared the large models Meta releases to "weapons of mass destruction."

The protesters call themselves "concerned citizens" and are led by Holly Elmore, who, according to LinkedIn, is an independent advocate for the AI Pause movement.

Image source: MISHA GUREVICH

She noted that if a model proves to be unsafe, its API can be shut down. Companies like Google and OpenAI, for example, only allow users to access their large models through APIs.

In contrast, Meta's LLaMA series of open source models makes the model weights available to the public, allowing anyone with the right hardware and expertise to copy and tweak the models themselves. Once model weights are released, the publishing company no longer has control over how the AI is used.

In Holly Elmore's view, releasing model weights is a dangerous strategy: anyone can modify the models, and the release cannot be undone. "The more powerful the model, the more dangerous this strategy is."

Compared to open source models, large models accessed through APIs often carry various safety features, such as response filtering or specific training to prevent dangerous or offensive outputs.
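To make the distinction concrete, here is a minimal sketch of the kind of server-side response filter an API provider might apply before returning model output. The blocklist, function name, and refusal message are all illustrative assumptions, not any provider's actual safeguards:

```python
# Hypothetical sketch: server-side output filtering for an API-served model.
# The topics and refusal text below are illustrative only.

BLOCKED_TOPICS = ("build a weapon", "steal credentials")

def filter_response(text: str) -> str:
    """Return the model output unchanged, or a refusal if it touches a blocked topic."""
    lowered = text.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that request."
    return text

print(filter_response("Here is a bread recipe."))          # passes through
print(filter_response("Step 1 to steal credentials: ..."))  # refused
```

The key point is that such a filter runs on the provider's servers, so it is only enforceable while the provider mediates access. Once the weights themselves are public, anyone can run the model locally without any filtering layer at all.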

If the model weights are released, it becomes much easier to retrain the model to bypass these "guardrails," making it more feasible to use open source models to create phishing tools or mount cyberattacks.

Image source: MISHA GUREVICH

She believes part of the problem is that current safety measures are not strong enough, so better ways of ensuring model safety need to be found.

Meta has not commented on the protest. However, Meta's chief AI scientist Yann LeCun appeared to respond to calls that "open source AI must be outlawed" by showcasing the flourishing open source AI startup community in Paris.


Many people disagree with Holly Elmore, believing that an open strategy for AI development is the only way to ensure trust in the technology.

Some netizens pointed out that open source cuts both ways: it brings greater transparency and fosters innovation, but it also carries the risk of abuse by malicious actors, just as open code does.


As expected, OpenAI was mocked again, with some saying it "should return to open source."


Still, many people remain worried about open source.

Peter S. Park, a postdoctoral fellow in AI safety at MIT, said that the widespread release of advanced AI models could cause many problems in the future, because it is essentially impossible to completely prevent their abuse.

However, Stella Biderman, executive director of the nonprofit AI research organization EleutherAI, said: "So far, there is little evidence that open source models have caused any specific harm. It is also unclear whether simply placing a model behind an API solves the safety problem."

Biderman added: "The basic elements of building an LLM have already been disclosed in freely available research papers, and anyone can read those papers to develop their own models."

She further pointed out: "If companies are encouraged to keep model details confidential, it could have serious adverse consequences for research transparency, public awareness, and scientific progress, especially for independent researchers."

Even as the impact of open source is being debated, it remains unclear whether Meta's approach is really open enough, and whether it truly counts as open source.

Stefano Maffulli, executive director of the Open Source Initiative (OSI), said: "The concept of open source AI has not been clearly defined. Different organizations use the term to refer to different things, indicating varying degrees of 'publicly available,' which can confuse people."


Maffulli points out that for open source software, the key question is whether the source code is publicly available and usable for any purpose. Reproducing an AI model, however, may require sharing the training data, data collection methods, training software, model weights, inference code, and more. The thorniest issue is that the training data may raise privacy and copyright problems.

OSI has been working on a precise definition of "open source AI" since last year and is likely to release an early draft in the coming weeks. Regardless, Maffulli believes open source is crucial to the development of AI: "If AI is not open source, we cannot have trustworthy, responsible AI," he said.

The divide between open source and closed source will persist, but the momentum of open source appears unstoppable.


Statement: This article is reproduced from sohu.com.