


The risk of AI getting out of control grows: open model weights trigger protest at Meta
Editors: Du Wei, Xiaozhou
Both open-source and closed-source AI have pros and cons, especially in the era of large models. What matters is how the technology is used.
People have long been divided over open source versus closed source in AI. In the era of large models, however, open source has quietly become a powerful force. According to a previously leaked internal Google document, the community is rapidly building on open-source models such as Meta's LLaMA to approach the capabilities of OpenAI's and Google's large models.
There is no doubt that Meta sits at the very center of the open-source world, with continued releases such as the recent Llama 2. But the tallest tree catches the most wind: Meta has lately landed in "trouble" precisely because of open source.
Outside Meta's offices in San Francisco, a group of protesters gathered with signs to oppose Meta's strategy of publicly releasing AI models, claiming that these releases cause the "irreversible proliferation" of potentially unsafe technology. Some protesters went so far as to compare Meta's released large models to "weapons of mass destruction."
The protesters call themselves "concerned citizens" and are led by Holly Elmore, who describes herself on LinkedIn as an independent advocate for the AI Pause movement.
Image source: MISHA GUREVICH
If a model proves unsafe, she noted, its API can simply be shut down. Companies such as Google and OpenAI, for example, only let users access their large models through an API.
By contrast, Meta's LLaMA series makes the model weights publicly available, allowing anyone with the right hardware and expertise to copy and modify the models themselves. Once the weights are released, the publishing company no longer has any control over how the AI is used.
In Holly Elmore's view, releasing model weights is a dangerous strategy: anyone can modify the model, and the release cannot be undone. "The more powerful the model, the more dangerous this strategy is."
Compared with open-source models, large models accessed through an API typically come with safety features such as response filtering or specific training that prevents dangerous or offensive outputs.
Once the weights are released, however, it becomes much easier to retrain the model to bypass these "guardrails," making it more feasible to use open-source models to build phishing software and mount network attacks.
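To make the contrast concrete: an API provider sits between the model and the user, so it can enforce a check on every response before it leaves the server. The sketch below is a deliberately naive, invented example of such a response filter (the blocklist, function name, and refusal text are all assumptions for illustration, not any provider's actual system):

```python
# Toy illustration of server-side response filtering, the kind of guardrail
# an API provider can enforce on every reply. The blocklist and refusal
# message are invented for this sketch; real providers use far more
# sophisticated classifiers and policy models.
BLOCKLIST = {"build a bomb", "phishing kit"}

def filter_response(text: str) -> str:
    """Refuse to return any text containing a blocked phrase."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in BLOCKLIST):
        return "Sorry, I can't help with that."
    return text
```

The key point is that this check only exists because the provider controls the serving stack. With open weights, a user runs the model locally and simply never invokes such a filter, or fine-tunes the refusal behavior away entirely.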
Image source: MISHA GUREVICH
She believes part of the problem is that current safety measures are simply insufficient, so better ways of securing models must be found.
Meta has not commented on the protest so far. However, Meta's chief AI scientist Yann LeCun appeared to respond to calls that "open source AI must be outlawed" by showcasing the flourishing open-source AI startup community in Paris.
Many people disagree with Holly Elmore, arguing that an open strategy for AI development is the only way to ensure trust in the technology.
Some netizens noted that open source cuts both ways: it brings greater transparency and spurs innovation, but, as with open-source code, it also carries the risk of abuse by malicious actors.
Predictably, OpenAI was mocked once again, with some saying it "should go back to open source."
Many people remain worried about open source
Peter S. Park, a postdoctoral fellow in AI safety at MIT, said the widespread release of advanced AI models could cause many problems in the future, because it is essentially impossible to completely prevent their abuse.
However, Stella Biderman, executive director of the nonprofit AI research organization EleutherAI, said: "So far, there is little evidence that open-source models have caused any specific harm, and it is unclear whether simply placing a model behind an API solves the safety problem."
Biderman argues: "The basic elements of building an LLM have already been disclosed in freely available research papers, which anyone can read to develop their own model."
She further pointed out: "If companies are encouraged to keep model details confidential, it could seriously harm the transparency of research in the field, public awareness, and scientific progress, especially for independent researchers."
Although the impact of open source is already widely debated, it remains unclear whether Meta's approach is truly open enough, and whether it can actually reap the benefits of open source.
Stefano Maffulli, executive director of the Open Source Initiative (OSI), said: "The concept of open-source AI has not been clearly defined. Different organizations use the term to mean different things, referring to varying degrees of 'publicly available,' which can confuse people."
Maffulli points out that for open-source software, the key question is whether the source code is publicly available and usable for any purpose. Reproducing an AI model, however, may require sharing the training data, the data-collection methods, the training software, the model weights, the inference code, and more. The thorniest of these is the training data, which may raise privacy and copyright issues.
OSI has been working on a precise definition of "open source AI" since last year and is likely to publish an early draft in the coming weeks. Either way, Maffulli believes open source is crucial to the development of AI. "If AI is not open source, we cannot have trustworthy, responsible AI," he said.
The divide between open source and closed source will persist, but open source is unstoppable.
