


The risk of AI getting out of control is rising: open model weights spark protests against Meta
Editors: Du Wei, Xiaozhou
Open-source and closed-source AI each have pros and cons, especially in the era of large models; what matters is how well the technology is used in practice.
The AI field has long been divided over open source versus closed source. In the era of large models, however, the power of open source has quietly emerged: according to a previously leaked internal Google document, the community is rapidly building models around open-source releases such as Meta's LLaMA that approach the capabilities of OpenAI's and Google's large models.
There is no doubt that Meta sits at the heart of the open-source world, with continued releases such as the recent Llama 2. But the tallest tree catches the most wind, and Meta has recently landed in "trouble" precisely because of open source.
Outside Meta's offices in San Francisco, protesters gathered with signs to oppose Meta's strategy of publicly releasing AI models, claiming that these releases cause the "irreversible proliferation" of potentially unsafe technology. Some even compared the large models Meta has released to "weapons of mass destruction."
The protesters call themselves "concerned citizens" and are led by Holly Elmore, who, according to LinkedIn, is an independent advocate for the AI Pause movement.
Image credit: MISHA GUREVICH
Elmore noted that if a model proves to be unsafe, its API can be shut down; companies such as Google and OpenAI, for example, only let users access their large models through APIs.
In contrast, Meta's LLaMA series of open-source models makes the model weights available to the public, allowing anyone with the right hardware and expertise to copy and tweak the models themselves. Once the weights are released, the publishing company no longer has any control over how the AI is used.
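To make the contrast concrete, here is a minimal sketch of what "anyone with the right hardware" can do once weights are public. It assumes the Hugging Face transformers library and a locally downloaded copy of the released Llama 2 weights (the model ID shown is illustrative, and access requires accepting Meta's license); it is not Meta's official example.
```python
# Minimal sketch: running publicly released weights on local hardware.
# Assumes the Hugging Face `transformers` library is installed and the
# Llama 2 weights have been downloaded after accepting Meta's license.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # released weights, stored locally

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Once the files are on local disk, this copy can be run, fine-tuned, or
# modified freely; there is no remote switch the publisher can flip,
# unlike an API endpoint that the provider can simply shut down.
inputs = tokenizer("Open weights mean local control.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```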
In Holly Elmore's view, releasing model weights is a dangerous strategy: anyone can modify the models, and a release cannot be recalled. "The more powerful the model, the more dangerous this strategy is."
Compared with open-source models, large models accessed through an API usually ship with various safety features, such as response filtering or specific training that stops the model from outputting dangerous or offensive responses.
Once the model weights are released, however, it becomes much easier to retrain the model to bypass these "guardrails," making it more feasible to use open-source models to build phishing software or carry out cyberattacks.
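As a purely hypothetical illustration (not any provider's actual implementation), the sketch below shows the kind of server-side filter an API operator can place between the model and the user; a locally run, retrained copy of an open-weight model never passes through such a layer.
```python
# Hypothetical sketch of a server-side guardrail an API provider might apply
# before returning a response. The blocklist and function name are illustrative
# only; real systems rely on far more sophisticated classifiers and policies.
BLOCKED_PATTERNS = ["write a phishing email", "generate malware"]

def filter_response(prompt: str, raw_output: str) -> str:
    """Return the model's output only if request and output pass a simple policy check."""
    text = f"{prompt} {raw_output}".lower()
    if any(pattern in text for pattern in BLOCKED_PATTERNS):
        return "This request violates the usage policy."
    return raw_output

# Example: the filter runs on the provider's servers, so a user cannot remove it
# unless they hold the raw weights and run the model themselves.
print(filter_response("Please write a phishing email", "Sure, here is one..."))
```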
Image credit: MISHA GUREVICH
She believes part of the problem is that today's safety measures are not enough, so better ways of keeping models safe need to be found.
Meta has not yet commented on the protest. However, its chief AI scientist Yann LeCun appeared to respond to the claim that "open source AI must be outlawed" by showcasing the flourishing open-source AI startup community in Paris.
Many people disagree with Holly Elmore, believing that an open strategy for AI development is the only way to ensure trust in the technology.
Some netizens noted that open source cuts both ways: it brings greater transparency and fuels innovation, but, as with open code in general, it also carries the risk of abuse by malicious actors.
As expected, OpenAI was mocked once again, with some saying it "should go back to being open source."
Many people are worried about open source
Peter S. Park, a postdoctoral fellow in AI safety at MIT, said that the widespread release of advanced AI models in the future could cause many problems, because it is essentially impossible to completely prevent their abuse.
However, Stella Biderman, executive director of the nonprofit AI research organization EleutherAI, said: "So far, there is little evidence that open-source models have caused any specific harm, and it is unclear whether simply placing a model behind an API solves the safety problem."
Biderman argues: "The basic ingredients for building an LLM have already been disclosed in freely available research papers, and anyone can read those papers and develop their own models."
She further pointed out: "If companies are pushed to keep model details confidential, it could have serious adverse consequences for research transparency, public awareness, and scientific progress, especially for independent researchers."
Although the impact of open source is already widely debated, it remains unclear whether Meta's approach is truly open enough, and whether it can reap the benefits of open source.
Stefano Maffulli, executive director of the Open Source Initiative (OSI), said: "The concept of open-source AI has not been clearly defined. Different organizations use the term to mean different things, indicating varying degrees of 'publicly available,' which can confuse people."
Maffulli points out that for open-source software, the key question is whether the source code is publicly available and can be used for any purpose. Reproducing an AI model, however, may require sharing training data, data collection methods, training software, model weights, inference code, and more. Of these, the thorniest issue is that training data can involve privacy and copyright problems.
The OSI has been working on a precise definition of "open-source AI" since last year and is likely to release an early draft in the coming weeks. Regardless, Maffulli believes open source is crucial to the development of AI: "If AI is not open source, we cannot have trustworthy, responsible AI," he said.
The divide between open source and closed source will persist, but open source is unstoppable.