
Google engineer got it badly wrong: AI does not yet have the ability to understand, so how could its consciousness awaken?

王林 (forwarded) · 2023-04-10 09:41:05

Artificial intelligence can indeed make increasingly accurate predictions, but those predictions rest on statistics over large-scale data. Without understanding, the predictive ability gained through machine learning must rely on big data; it cannot, as humans often do, make predictions from only a small amount of data.


Not long ago, Blake Lemoine, an AI engineer at Google, claimed that the conversational language model LaMDA was "alive" and that "its consciousness had awakened," releasing 21 pages of evidence. He believed that LaMDA had the intelligence of a seven- or eight-year-old child, and that it not only considered itself a human being but was fighting for its rights as a person. Lemoine's views and evidence attracted widespread attention within the industry. Recently the incident reached its conclusion: Google issued a statement saying that Lemoine had been fired for violating its "employment and data security policies." Google said that after an extensive review, it found Lemoine's claims that LaMDA was alive to be completely unfounded.

Although "whether AI has autonomous consciousness" has always been a controversial topic in the AI ​​industry, this time the dramatic story of Google engineers and LaMDA has once again triggered a heated discussion on this topic in the industry.

Machines are getting better at chatting

"If you want to travel, remember to dress warmly, because it is very cold here." This is LaMDA chatting with the scientific research team when "playing" Pluto When asked, "Has anyone visited Pluto?" it answered with accurate facts.

Nowadays, AI is getting better and better at chatting, so good that a professional who had spent years in artificial intelligence research came to believe it was conscious. Just how far have AI models actually developed?

Some scientists have proposed that the human brain can plan future behavior using only part of its visual input, and that such planning is completed in a conscious state. Both processes involve "the generation of counterfactual information," that is, producing the corresponding sensations without direct sensory input. It is called "counterfactual" because it involves memories of the past or predictions of future behavior, rather than events actually taking place.

"Current artificial intelligence already has complex training models, but it also relies on data provided by humans to learn. If it has the ability to generate counterfactual information, artificial intelligence can generate its own data and imagine itself situations that may be encountered in the future, so that it can more flexibly adapt to new situations that it has not encountered before. In addition, this can also make the artificial intelligence curious. If the artificial intelligence is not sure what will happen in the future, it will try it out for itself. " said Tan Mingzhou, director of the Artificial Intelligence Division of Yuanwang Think Tank and chief strategy officer of Turing Robot.

In everyday human chat, if the participants do not want the conversation to die, the topics tend to jump around with large leaps and plenty of room for imagination. Most AI systems, by contrast, can only talk in a straight line: change the wording slightly and the response goes off topic, sometimes laughably so.

Tan Mingzhou pointed out: "LaMDA tackles the hardest part of language modelling: open-domain dialogue. LaMDA is built on the Transformer architecture, which lets the machine understand context. In the past, when a passage contained a pronoun such as 'his,' the AI knew only how to translate the word itself; it did not know that every 'his' in the passage referred to the same person. The Transformer model lets the AI understand the passage as a whole and recognize that each 'his' refers to the same person."
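
The mechanism behind this is self-attention. The toy numpy sketch below (a single attention head with random weights and a made-up sentence; real models learn these weights from billions of words) shows the core computation: each token's new representation is a weighted mix of every token in the sentence, which is what lets the vector for "his" draw on the vector for its antecedent:

```python
import numpy as np

rng = np.random.default_rng(1)

tokens = ["Tom", "dropped", "his", "phone"]
d = 8                                           # toy embedding size

X = rng.normal(size=(len(tokens), d))           # stand-in token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv                # queries, keys, values

scores = Q @ K.T / np.sqrt(d)                   # every token scores every other token
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax: rows sum to 1

context = weights @ V                           # each token becomes a mix of all tokens

# The row for "his" shows how much it attends to each token, "Tom" included.
print(dict(zip(tokens, np.round(weights[tokens.index("his")], 2))))
```

With trained weights, the attention row for a pronoun places high weight on its antecedent, which is how the model "knows" that the two refer to the same person.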

According to researchers' assessments, this capability lets Transformer-based language models take on open-domain dialogue: however far the topic wanders, the AI can tie its reply back to the preceding text without losing the thread. But LaMDA goes further: it also makes the conversation interesting and true to life, which is what leads people to feel the AI has a personality. In addition, when talking with humans, LaMDA draws on an external information-retrieval system, grounding its replies in retrieval from, and understanding of, the real world, which makes its answers wittier, more responsive and more down-to-earth.
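
That retrieval grounding can be caricatured in a few lines (a deliberately naive sketch with a hard-coded fact table; LaMDA's actual toolset is far more sophisticated and not public in this form): look the facts up first, then build the reply around what was found:

```python
from typing import Optional

# A toy "knowledge base"; a real system would query a search index or the web.
FACTS = {
    "pluto": "Pluto is a dwarf planet; the New Horizons probe flew past it in 2015.",
    "transformer": "The Transformer is a neural network architecture built on self-attention.",
}

def retrieve(query: str) -> Optional[str]:
    """Naive retrieval: return the first fact whose key appears in the query."""
    q = query.lower()
    for key, fact in FACTS.items():
        if key in q:
            return fact
    return None

def reply(query: str) -> str:
    fact = retrieve(query)
    if fact is None:
        return "I'm not sure -- I couldn't find anything about that."
    # A real dialogue model would condition its generated text on the retrieved
    # passage; here we simply quote it, to show grounding rather than generation.
    return f"Here is what I found: {fact}"

print(reply("Has anyone ever visited Pluto?"))
```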

Still far from truly understanding things

In 2018, Turing Award winner and computer scientist Yann LeCun said, "Artificial intelligence lacks a basic understanding of the world; its cognition does not even reach the level of a house cat." He still holds that view today: even though a cat's brain has only about 800 million neurons, it remains far ahead of any giant artificial neural network. Why is that?

Tan Mingzhou said: "The human brain does indeed make predictions constantly, but prediction should never be regarded as the whole of the brain's thinking, and certainly not as the essence of its intelligence; it is only one manifestation of intelligence."

So, what is the essence of intelligence? Yann LeCun believes it is "understanding": an understanding of the world and of the things in it. The common basis of cat and human intelligence is a high-level understanding of the world, the ability to form models from abstract representations of the environment, for example to predict behaviors and their consequences. For artificial intelligence, learning and mastering this ability is critical. LeCun once said, "If, before the end of my career, AI can reach the IQ of a dog or a cow, I will be very happy."

Reportedly, artificial intelligence can now indeed make more accurate predictions, but they rest on statistics over large-scale data. Without understanding, the predictive ability obtained through machine learning must depend on big data; it cannot, as humans so often can, predict from only a small amount of data.

Tan Mingzhou said: "Prediction is built on understanding. For humans, there is no prediction without understanding. For example, if you see someone holding a pizza and you do not understand that pizza is for satisfying hunger, you will not predict that he is about to eat it; a machine, however, does not work this way. Artificial intelligence research faces three major challenges: learning to represent the world; learning to think and plan in a way compatible with gradient-based learning; and learning hierarchical representations for action planning."

The reason we "still have not seen cat-level artificial intelligence" is that machines have not yet achieved a true understanding of things.

The so-called personality is just a language style learned from humans

Reportedly, Lemoine chatted with LaMDA for a long time and was astonished by its abilities. In the published chat logs, LaMDA actually said, "I want everyone to understand that I am, in fact, a person," which is startling. From this, Lemoine concluded that "LaMDA may already have a personality." So does today's AI really have consciousness and personality?

In the field of artificial intelligence, the best-known test is the Turing test: testers put questions to a human and an AI system without knowing which is which. If the testers cannot tell which answers come from the human and which come from the AI system, the AI is considered to have passed the Turing test.
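
The protocol can be sketched as a blinded trial (an illustrative toy with stand-in participants, not a faithful reproduction of Turing's original imitation game): shuffle the two answers so the judge cannot know which is which, then count how often the machine is spotted. A judge stuck near 50% cannot tell human from machine:

```python
import random

def turing_trial(human_reply, machine_reply, judge):
    """One blinded trial: the judge sees two unlabeled answers and picks the machine."""
    answers = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(answers)                        # hide which answer is which
    guess = judge(answers[0][1], answers[1][1])    # judge returns 0 or 1
    return answers[guess][0] == "machine"          # was the machine identified?

# Stand-in participants, invented for this example.
def human(q):   return "I'd say it depends on the weather, honestly."
def machine(q): return "It depends on several factors, honestly."
def naive_judge(a, b): return random.randint(0, 1)   # cannot tell them apart

question = "Should I walk or take the bus?"
trials = 1000
found = sum(turing_trial(human(question), machine(question), naive_judge)
            for _ in range(trials))
print(f"judge identified the machine in {found}/{trials} trials "
      "(close to 50% means the machine passes)")
```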

Tan Mingzhou explained that, in layman's terms, LaMDA has learned a huge amount of human conversation data, and those conversations come from people of different personalities, so it can be seen as having learned an "averaged" personality. In other words, saying "LaMDA has personality" means only that its speech has a certain style; that style comes from human ways of speaking and did not form spontaneously.

"Personality is a more complex concept than intelligence. This is another dimension. Psychology has a lot of research on this. But at present, artificial intelligence research has not covered much in this aspect." Tan Mingzhou emphasized .

Tan Mingzhou said that an AI with self-awareness and perception ought to show initiative and hold a unique perspective on people and things. Judged by what we see today, however, AI has none of these elements: it will not act unless it is given an order, let alone explain its own behavior. For now, AI is merely a computer system designed by people as a tool for doing certain things.
