
Open letter from Musk and others calling for a pause on AI research questioned, accused of fueling hype and misrepresenting research

王林 (Wang Lin)
2023-04-17 08:07:02


According to news on March 31, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and thousands of other signatories, including many AI researchers, recently signed an open letter calling for a moratorium on research into more advanced AI technologies. However, the letter has been questioned by many experts, and even by some of its signatories, and accused of exacerbating AI hype, carrying forged signatures, and misrepresenting research papers.

The open letter was written by the Future of Life Institute, a nonprofit whose mission is to “reduce the global catastrophic and existential risks posed by powerful technologies.” Specifically, the institute focuses on mitigating long-term "existential" risks to humanity, such as superintelligent AI. Musk is a supporter of the organization, donating $10 million to it in 2015.

The letter reads: "More powerful AI systems should be developed only when we are confident that their effects will be positive and their risks controllable. We therefore call on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. AI labs and independent experts should use this time to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and supervised by independent outside experts."

The letter also clarified: "This does not mean a pause on AI development in general, merely a stepping back from the dangerous race toward ever-larger, unpredictable black-box models with emergent capabilities." This refers to the AI race among big tech companies such as Microsoft and Google, which have released many new AI products over the past year.

Other notable signatories include Emad Mostaque, CEO of image-generation startup Stability AI, author and historian Yuval Noah Harari, and Pinterest co-founder Evan Sharp. There were also signatures from employees of companies involved in the AI race, including Google sister company DeepMind and Microsoft. Although OpenAI developed and commercialized the GPT series of AI models, no one from the company signed the letter.

Despite a verification process, the letter initially carried many false signatures, including ones impersonating OpenAI CEO Sam Altman and Meta chief AI scientist Yann LeCun. The Future of Life Institute has since cleaned up the list and paused the display of additional signatures while it verifies each one.

However, the release of the open letter caused an uproar and drew scrutiny from many AI researchers, including some of the signatories themselves. Some signers recanted their positions, some high-profile signatures proved to be fake, and a growing number of AI researchers and experts publicly objected to the letter's framing and proposed approach.

Gary Marcus, professor of psychology and neuroscience at New York University, said: "The letter is not perfect, but the spirit is correct." Meanwhile, Stability AI CEO Emad Mostaque said on Twitter that OpenAI is a truly "open" AI company: "So, I don't think pausing training for six months is the best idea, and I disagree with many points in the letter, but parts of it are genuinely interesting."

AI experts criticized the letter for furthering "AI hype" while failing to list, or call for concrete action on, the dangers AI poses today. Some argued that it promotes a long-standing but somewhat unrealistic worldview, one criticized as harmful and anti-democratic because it favors the super-rich and allows them to justify morally dubious behavior on certain grounds.

Emily M. Bender, a professor in the Department of Linguistics at the University of Washington and co-author of the paper cited at the beginning of the open letter, wrote on Twitter that the letter is "riddled with AI hype" and misuses her research. The letter states: "Extensive research shows that AI systems with human-like intelligence may pose a significant threat to society and humanity." But Bender countered that her research specifically points to current large language models and their use within systems of oppression, a threat far more concrete and urgent than the future AI dangers posited in the open letter.

Bender continued: "We published a full paper in late 2020 pointing out that this rush to build ever-larger language models without weighing the risks is a problem. But the risks and harms were never about the AI being 'too powerful'; rather, they are about the concentration of power in people's hands, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem."

Sasha Luccioni, a research scientist at AI startup Hugging Face, said in an interview: "The open letter is essentially misleading: it draws everyone's attention to the hypothesized powers and harms of large language models and proposes very vague, almost ineffective solutions, instead of focusing on those harms and addressing them here and now, for example by demanding more transparency around LLMs' training data and capabilities, or legislation governing where and when they can be used." Arvind Narayanan, associate professor of computer science at Princeton University, said the open letter is full of AI hype that "makes it harder to address real, already-occurring AI harms."

The open letter raises several questions: "Should we automate all jobs, including those that are fulfilling? Should we cultivate non-human minds that may eventually surpass and replace human intelligence? Should we continue to develop AI at the risk of losing control of our civilization?"

In this regard, Narayanan called these questions "nonsense" and "absolutely ridiculous." Whether computers will replace humans and take over human civilization is a very distant prospect, part of a long-termist mindset that distracts us from current problems. After all, AI is already being integrated into people's jobs and reducing the need for certain professions, rather than acting as a form of "non-human mind" that will make us "obsolete."

Narayanan also said: "I think these can be regarded as legitimate long-term concerns, but they have been invoked again and again, diverting attention from present harms, including very real information security and safety risks. Addressing these risks will require cooperation. Unfortunately, the hype in this letter, including its exaggeration of AI capabilities and of existential risk, may lead to even more constraints on AI models, making it harder to address those risks."

However, some signatories of the open letter defended it. Yoshua Bengio, founder and scientific director of the research institute Mila, said the six-month moratorium is necessary so that governance bodies, including governments, can understand, audit, and verify AI systems and ensure they are safe for the public. He added that there is a dangerous concentration of power, that AI tools could destabilize democracy, and that "there is a conflict between democratic values and the way these tools are being developed."

Max Tegmark, professor of physics at MIT's NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) and president of the Future of Life Institute, said the worst-case scenario is that humanity gradually loses control of its civilization. The risk now, he said, is that "we lose control to a group of unelected, powerful people in technology companies who have too much influence." The letter hints at this fear of losing control of civilization but offers no concrete measures beyond calling for a six-month moratorium.

Timnit Gebru, a computer scientist and founder of the Distributed AI Research Institute, posted on Twitter that it is ironic that the letter calls for a moratorium on training AI models more powerful than GPT-4 yet fails to address the host of concerns surrounding GPT-4 itself.


Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for removal.