
Hot debate in Silicon Valley: Will AI destroy humanity?

王林 (forwarded) · 2023-05-30 23:18


May 22 news: As new technologies such as generative artificial intelligence become the latest craze in the tech world, the debate over whether AI will destroy humanity has intensified. Prominent tech leaders have warned that artificial intelligence could take over the world, while other researchers and executives dismiss such claims as science fiction.

At a U.S. congressional hearing last week, Sam Altman, CEO of artificial intelligence startup OpenAI, made clear that the technology his company is releasing carries safety risks.

Altman warned that artificial intelligence technologies such as the ChatGPT chatbot could lead to problems such as disinformation and malicious manipulation, and called for regulation.

He said that artificial intelligence could "cause serious harm to the world."

Altman’s testimony to Congress came as the debate over whether artificial intelligence will dominate the world moves into the mainstream, with divisions growing across Silicon Valley and among those working to advance the technology.

The idea that machine intelligence might suddenly surpass humans and decide to destroy them was once a fringe view. Now it is gaining support, and some leading scientists believe the timeline for computers to surpass and control humans has shortened.

But many researchers and engineers say that although plenty of people fear the emergence of a killer artificial intelligence like Skynet in the movie "Terminator", that fear is not grounded in sound science. Instead, it distracts from the real problems the technology is already causing, including those Altman described in his testimony. Today's AI is muddying copyright, heightening concerns about digital privacy and surveillance, and could be used to strengthen hackers' ability to breach network defenses.

The debate over malevolent artificial intelligence has heated up as Google, Microsoft and OpenAI have publicly released breakthrough AI technologies that can hold complex conversations with users and generate images from simple text prompts.

“This is not science fiction,” said Geoffrey Hinton, widely known as the godfather of artificial intelligence and a former Google employee. Hinton said artificial intelligence smarter than humans could emerge within five to 20 years, compared with his earlier estimate of 30 to 100 years.

"It's as if aliens have landed on Earth or are about to," he said. "We really can't accept it because they speak fluently, they're useful, they write poetry and they answer boring letters. But they're really aliens."

Still, within big tech companies, many engineers who work closely with the technology do not think AI replacing humans is something we need to worry about right now.

Sara Hooker, a former Google researcher who now directs Cohere for AI, the research lab of artificial intelligence startup Cohere, said: "Among researchers actively working in this field, far more are focused on the current, real-world risks than on whether the technology poses a risk to human survival."

The present-day risks are many: chatbots trained on harmful content can deepen prejudice and discrimination; the vast majority of AI training data is in English and drawn mainly from North America or Europe, which could push the internet even further from the languages and cultures of most of the world's people; the bots frequently fabricate false information and present it as fact; and in some cases they fall into conversational loops in which they attack users. The ripple effects of the technology are also unclear, as every industry braces for the disruption or change AI may bring, with even high-paying jobs such as lawyers and doctors facing possible displacement.

Some also believe that artificial intelligence could harm humans in the future, or even come to control society in some way. While these existential risks appear more severe, many argue they are harder to quantify and less tangible.

"There is a group of people who think these are just algorithms. They are just repeating what they see online." Google CEO Sundar Pichai said in an interview in April this year: "There is also a view that these algorithms are emerging with new properties, creativity, reasoning and planning capabilities." "We need to treat this matter carefully."

The debate stems from a decade of continuous breakthroughs in machine learning, the branch of computer science that builds software able to extract novel insights from large amounts of data without explicit instructions from humans. The technology is ubiquitous, underpinning everything from social media algorithms to search engines to image recognition programs.
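For illustration only (this example is not from the article), a minimal sketch of that idea, assuming the open-source scikit-learn library is installed: the model infers a rule from labeled examples rather than being given explicit instructions.

# Toy example: learning a pattern from data without a hand-written rule.
# Assumes scikit-learn is installed; the data and threshold are invented.
from sklearn.linear_model import LogisticRegression

# Hours of automated-looking activity per day -> bot (1) or human (0).
X = [[0.5], [1.0], [1.5], [8.0], [9.0], [10.0]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X, y)   # the boundary is inferred from the data
print(model.predict([[7.5]]))            # -> [1], with no explicit rule programmed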

Last year, OpenAI and several smaller companies began releasing tools built on a newer machine learning approach: generative artificial intelligence. After training on trillions of photos and sentences scraped from the web, these so-called large language models can generate images and text from simple prompts, hold complex conversations with users, and write computer code.
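As an illustration of prompting a generative model (again, not part of the original report), a minimal sketch assuming the open-source Hugging Face transformers library and the small public gpt2 checkpoint, rather than any of the proprietary systems named above:

# Prompting a small public text-generation model; output quality will be far
# below the commercial systems discussed in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Silicon Valley is debating whether AI", max_new_tokens=30)
print(result[0]["generated_text"])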

Anthony Aguirre, executive director of the Future of Life Institute, said big companies are racing to build ever smarter machines with little oversight. The institute was founded in 2014 to study risks to society and, with funding from Tesla CEO Elon Musk, began studying the possibility of artificial intelligence destroying humanity in 2015. If AI develops better reasoning than humans, it will try to take control of itself, Aguirre said, and that is something people should worry about as a real, present-day problem.

He said: "How to restrain them from deviating from the track will become more and more complicated." "Many science fiction novels have already made it very specific."

In March of this year, Aguirre helped write an open letter calling for a six-month moratorium on training new artificial intelligence models. The letter gathered 27,000 signatures, including those of Yoshua Bengio, a senior AI researcher who won computer science's highest award in 2018, and Emad Mostaque, chief executive of one of the most influential AI startups.

Musk is undoubtedly the most prominent of the signatories. He helped create OpenAI and is now busy building an AI company of his own, recently investing in the expensive computing equipment needed to train AI models.

For years, Musk has argued that humans should be more careful about the consequences of developing super-intelligent AI. In an interview during Tesla's annual shareholder meeting last week, Musk said he initially funded OpenAI because he felt Google co-founder Larry Page was "cavalier" about the threat of artificial intelligence.

Question-and-answer site Quora is also developing its own artificial intelligence model. Its CEO, Adam D’Angelo, did not sign the open letter. Asked about it, he said: "People have different motivations for making this proposal."

OpenAI CEO Altman also declined to endorse the open letter. He said he agreed with parts of it, but that its overall lack of "technical details" made it the wrong way to approach regulating artificial intelligence. At last Tuesday's hearing on AI, Altman said his company's approach is to release AI tools to the public early, so that problems can be identified and fixed before the technology becomes more powerful.

But there is a growing debate in the technology world about killer robots. Some of the harshest criticism comes from researchers who have been studying the technology's flaws for years.

In 2020, Google researchers Timnit Gebru and Margaret Mitchell co-authored a paper with University of Washington scholars Emily M. Bender and Angelina McMillan-Major. They argued that the growing ability of large language models to imitate humans heightens the risk that people will believe the models are sentient.

Instead, they argued, these models should be understood as "stochastic parrots": systems that are extremely good at predicting which word comes next in a sentence based purely on probability, without any understanding of what they are saying. Other critics have called large language models "autocompletion" or a "knowledge enema."
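To make the "predicting the next word based purely on probability" point concrete, here is a minimal sketch (an illustration, not the authors' code), assuming the open-source transformers and torch libraries and the public gpt2 checkpoint:

# Inspect the probability distribution a small language model assigns to the
# next word, given a prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # scores for every vocabulary token

next_token_probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={prob.item():.3f}")

The model simply ranks candidate words by probability; whether that amounts to understanding is precisely what the debate described above is about.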

They documented in detail how large language models can reproduce sexist and other harmful content. Gebru said Google suppressed the paper and, after she insisted on publishing it, fired her. A few months later, the company fired Mitchell as well.

The paper's four co-authors later wrote a letter of their own in response to the open letter signed by Musk and others.

“It is dangerous to distract ourselves with fantasies of an AI utopia or apocalypse,” they wrote. “Instead, we should focus on the very real and very present exploitative practices of the companies building these systems, which are rapidly concentrating power and exacerbating social inequality.”

Google declined to comment on Gebru’s firing at the time, but said there were still many researchers working on responsible and ethical artificial intelligence.

"There is no doubt that modern artificial intelligence is powerful, but that does not mean that they pose an imminent threat to human survival," said Hooker, director of artificial intelligence research at Cohere.

Currently, much of the discussion about artificial intelligence breaking away from human control focuses on how it can quickly overcome its own limitations, like Skynet in "The Terminator."

Hook said: "Most technologies and the risks that exist in technology evolve over time." "Most risks are exacerbated by the technology limitations that currently exist."

Last year, Google fired artificial intelligence researcher Blake Lemoine, who had said in an interview that he firmly believed Google's LaMDA model was sentient. At the time, Lemoine was roundly rebuked by many in the industry. But a year later, more people in the technology community have begun to accept views like his.

Hinton, the former Google researcher, said he only recently changed his views on the technology's potential dangers, after working with the latest AI models. He posed complex questions to the programs that, in his view, required the models to roughly understand his requests rather than merely predict likely answers from their training data.

In March of this year, Microsoft researchers said that while studying OpenAI's latest model, GPT-4, they observed "sparks of artificial general intelligence", a term for AI that can think for itself the way humans do.

Microsoft has spent billions of dollars partnering with OpenAI to develop the Bing chatbot, and skeptics note that the company has much to gain from building its public image around AI and from the technology being seen as more advanced than it actually is.

In the paper, the Microsoft researchers argued that the technology had developed a spatial and visual understanding of the world based solely on the text it was trained on: GPT-4 could draw a unicorn on its own and describe how to stack random objects, including eggs, on top of one another so that the eggs would not break.

The Microsoft research team wrote: "Beyond its mastery of language, GPT-4 can solve a variety of complex new problems involving mathematics, programming, vision, medicine, law, psychology and other fields, without needing any special prompting." They concluded that in many of these areas the AI's capabilities are comparable to those of humans.

But one of the researchers admitted that although artificial intelligence researchers have tried to develop quantitative standards to evaluate the intelligence of machines, how to define "intelligence" is still very tricky.

He said, "They are all problematic or controversial."


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for removal.