
Tech Giants' Billions in AI R&D Draw Accusations of Hype: The AI That Once Beat Humans Is Up for Sale

王林 | 2023-05-13 12:58:06

Over the years, technology giants such as Google and Facebook have poured billions of dollars into artificial intelligence (AI) research and development while hyping its potential. Now, researchers say, it is time to reset expectations for AI.

AI has indeed made notable leaps in recent times: companies have built systems that can produce conversations, poems, and images that appear convincingly human-made. Yet AI ethicists and researchers warn that some companies are exaggerating the technology's capabilities, and that the hype is breeding widespread misunderstanding and distorting policymakers' views of its power and fallibility.

"We're out of balance," said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, a nonprofit research organization in Seattle. That imbalance, Etzioni and other researchers say, helps explain why so many people were swayed by a Google engineer's claim that the company's AI is sentient.

That engineer, Blake Lemoine, argued on the basis of his religious beliefs that one of the company's AI systems should be considered sentient. He claimed the AI chatbot had effectively become a person, with the right to decide whether experiments could be run on it. Google suspended him and rejected his claims, saying its ethicists and technology experts had studied the possibility and dismissed it.


Lemoine claims AI is sentient

Researchers say that within the broader scientific community, the view that AI is becoming conscious, or ever could be, remains a fringe position.

In practical terms, the range of technologies grouped under the AI label is still most useful for mundane back-end tasks, such as processing user data to better target advertising, content, and product recommendations. Over the past decade, companies such as Google, Facebook parent Meta, and Amazon have invested heavily in such capabilities to power their growth and profit engines. Google, for example, uses AI to better parse complex search queries, helping it deliver relevant ads and web results.

Some startups have even grander ambitions. One of them, OpenAI, which has raised billions of dollars from donors and investors including Tesla CEO Elon Musk and Microsoft Corp., aims to achieve so-called general AI: systems that match or exceed human intelligence in every dimension. Some researchers believe this is decades away, if it is achievable at all.

The race among these companies to outdo one another has accelerated AI's development and spawned a growing number of high-profile demonstrations that have captured the public imagination and drawn attention to the technology.

OpenAI's DALL-E system, which generates artwork from user prompts such as "McDonald's in orbit around Saturn" or "a bear in sports gear competing in a triathlon," has spawned a flood of memes on social media in recent weeks. Google has since followed DALL-E's example and launched its own text-based artwork generation system.

As impressive as these results are, a growing number of experts warn that companies are failing to keep their publicity in check.

Margaret Mitchell, who formerly co-led Google's ethical AI team, was fired after writing a paper critical of Google's systems. She said one of Google's selling points to shareholders is that it is the best in the world at AI.

The Limits of AI

Mitchell now works at an AI startup called Hugging Face. She and Timnit Gebru, her former co-lead on Google's ethical AI team, were among the first to warn of the technology's dangers. Gebru was also forced out of Google.

In the last paper they wrote while at Google, they argued that these technologies can sometimes cause harm precisely because their human-like capabilities give them a human-like potential for failure. For example, Facebook's AI system mistranslated an Arabic post reading "good morning" as "hurt them" in English and "attack them" in Hebrew; Israeli police arrested the Palestinian man who had posted the greeting before realizing the mistake.

Internal Facebook documents leaked last year likewise showed that the company's AI systems could not consistently identify first-person shooting videos or racist remarks, and removed only a small fraction of the content that violated its rules. Facebook said improvements in AI have significantly reduced hate speech and other rule-breaking content.

The Gap Between Ideal and Reality

The ideal is lofty; the reality falls short. Etzioni and others pointed to IBM's marketing around Watson, the AI system that became famous for beating humans on the quiz show "Jeopardy!" Yet after a decade and billions of dollars of investment, IBM said last year that it was exploring the sale of its Watson Health unit, whose flagship product was supposed to help doctors diagnose and treat cancer.

The stakes are now higher because AI is ubiquitous: a growing number of companies build it into the software that permeates our digital lives, from email and search engines to news feeds and voice assistants.

Rejecting the claim that the company's AI is sentient, Google spokesman Brian Gabriel said its chatbots and other conversational tools "can riff on any fantastical topic." "If you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring, which is different from sentience," he said.

The AI Perception Gap Seeps Into Government Policy

Elizabeth Kumar, a computer science doctoral student at Brown University who studies AI policy, said this perception gap has quietly crept into policy documents.

Recent local, federal, and international regulations and regulatory proposals have sought to address the potential of AI systems to cause harm through discrimination, manipulation, or other means, all premised on the assumption that the systems are highly capable. Kumar said these efforts have largely overlooked the possibility of harm from AI systems that "simply don't work," which she argues is the more likely scenario.

Etzioni, who also serves on the Biden administration's National Artificial Intelligence Research Resource Task Force, noted that policymakers often struggle to grasp these issues. "I can tell you from my conversations with some of them that they are well-intentioned and ask good questions, but they don't know everything," he said.


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for removal.