
US media: Musk and others are right to call for a suspension of AI training and need to slow down for safety

By 王林, 2023-04-13 09:16


According to news on March 30, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and more than 1,000 others recently signed an open letter calling for a moratorium on training AI systems more powerful than GPT-4. Business Insider (BI), a mainstream American online media outlet, argues that for the benefit of society as a whole, AI development needs to slow down.

In the open letter, Wozniak, Musk and others requested that as AI technology becomes increasingly powerful, safety guardrails be set up and the training of more advanced AI models be suspended. They believe that for powerful AI models like OpenAI's GPT-4, "they should only be developed when we are confident that their impact is positive and the risks are controllable."

Of course, this is not the first time people have called for safety guardrails for AI. However, as AI becomes more complex and advanced, calls for caution are rising.

James Grimmelmann, a professor of digital and information law at Cornell University, said: "Slowing down the development of new AI models is a very good idea. If AI ends up being beneficial to us, then there is no harm in waiting a few months or years; we will reach the same destination anyway. And if it turns out to be harmful, then we buy ourselves extra time to work out the best way to respond and to understand how to fight against it."

The rise of ChatGPT highlights the potential dangers of moving too fast

Last November, when OpenAI's chatbot ChatGPT was launched for public testing, it caused a huge sensation. Understandably, people started probing ChatGPT's capabilities, and its potential for disruption to society quickly became apparent. ChatGPT began passing medical licensing exams, giving instructions on how to make bombs, and even creating an alter ego for itself.

The more we use AI, especially so-called generative AI (AIGC) tools like ChatGPT or the text-to-image tool Stable Diffusion, the more we see its shortcomings, its potential to create bias, and how powerless we humans appear to be in harnessing its power.

BI editor Hasan Chowdhury wrote that AI has the potential to “become a turbocharger, accelerating the spread of our mistakes.” Like social media, it taps into the best and worst of humanity. But unlike social media, AI will be more integrated into people's lives.

ChatGPT and similar AI products already tend to distort information and make mistakes, something Wozniak has spoken about publicly. They are prone to so-called "hallucinations" (fabricated information), and even OpenAI CEO Sam Altman has admitted that the company's models can produce racist, sexist, and otherwise biased answers. Stable Diffusion has also run into copyright issues and been accused of stealing inspiration from the work of digital artists.

As AI becomes integrated into more everyday technologies, we may introduce more misinformation into the world on a larger scale. Even tasks that seem benign to an AI, such as helping plan a vacation, may not yield completely trustworthy results.

It’s difficult to develop AI technology responsibly when the free market demands rapid development

To be clear, AI is an incredibly transformative technology, especially AIGC tools like ChatGPT. There is nothing inherently wrong with developing machines to do most of the tedious work that people hate.

While the technology has created an existential crisis among the workforce, it has also been hailed as an equalizing tool for the tech industry. There is also no evidence that ChatGPT is preparing to lead a bot insurgency in the coming years.

Many AI companies have ethicists involved in developing this technology responsibly. But if rushing a product to market outweighs concern for its social impact, teams focused on building AI safely cannot do their jobs in peace.

Speed seems to be a factor that cannot be ignored in this AI craze. OpenAI believes that if the company moves fast enough, it can fend off the competition and become the leader in the AIGC space. That has prompted Microsoft, Google, and just about every other company to follow suit.

Releasing powerful AI models for public use before they are ready does not make the technology better. The best use cases for AI have yet to be found, because developers have to cut through the noise generated by the technology they create, and users are distracted by that same noise.

Not Everyone Wants to Slow Down

The open letter from Musk and others has also been criticized by others, who believe that it misses the point.

Emily M. Bender, a professor at the University of Washington, said on Twitter that Musk and other technology leaders only focus on the power of AI in the hype cycle, rather than the actual damage it can cause.

Cornell University digital and information law professor James Grimmelmann added that the tech leaders who signed the open letter are "belatedly arriving" and may have opened a Pandora's box that brings trouble to themselves. He said: "Now that they have signed this letter, they can't turn around and refuse to apply the same policy to other technologies such as self-driving cars."

Suspending development or imposing more regulation may not achieve results either. But the conversation now seems to have turned. AI has been around for decades; perhaps we can wait a few more years.
