Hinton, Turing Award winner: I am old, I leave it to you to control AI that is smarter than humans
Remember how experts split into two camps over the question of whether AI might wipe out humanity?
Puzzled about why "AI will cause risks", Andrew Ng recently started a dialogue series with two Turing Award winners:
Do AI risks really exist? What exactly are they?
Interestingly, after in-depth conversations with Yoshua Bengio and Geoffrey Hinton, he and they "reached a lot of consensus"!
Both believe that the two camps should jointly discuss the specific risks AI could create and clarify how much these models actually understand. Hinton also specifically named Turing Award winner Yann LeCun as a "representative of the opposition".
The debate on this issue is still very fierce; even respected scholars like Yann believe that large models do not really understand what they are saying.
Musk was also very interested in this conversation.
In addition, at the recent Beijing Academy of AI (BAAI) Conference, Hinton once again "preached" about the risks of AI, saying that superintelligence smarter than humans will soon appear:
We are not used to thinking about things that are much smarter than us, or about how to interact with them.
I don't see how to prevent superintelligence from "getting out of control" right now, and I'm old. I hope more young researchers will master methods for controlling superintelligence.
Let’s take a look at the core points of these conversations and the opinions of different AI experts on this matter.
First came the dialogue with Bengio, in which Ng and he reached a key consensus:
Scientists should try to identify “specific scenarios where AI risks exist.”
In other words, both sides need to agree on the specific scenarios in which AI could cause major harm to humans, or even lead to human extinction.
Bengio believes the future of AI is full of "fog and uncertainty", so it is necessary to pin down the specific scenarios in which AI would cause harm.
Then came the conversation with Hinton, and the two parties reached two key consensuses.
On the one hand, scientists need to discuss the issue of "AI risks" thoroughly in order to formulate good policies;
On the other hand, AI really is coming to understand the world, and listing the key technical questions around AI safety can help scientists reach consensus.
In the process, Hinton raised the key question on which agreement is needed: whether large dialogue models such as GPT-4 and Bard really understand what they are saying:
Some people think they understand; some people think they are just stochastic parrots.
I think we all believe they understand (what they are saying), but some scholars we respect very much, such as Yann, think they do not.
Of course LeCun, having been "called out", arrived promptly and set out his views in earnest:
We all agree that "everyone needs to reach a consensus on some issues." I also agree with Hinton that LLMs have some degree of understanding, and that calling them "just statistics" is misleading.
1. But their understanding of the world is very superficial, largely because they are trained on plain text alone. AI systems that learn how the world works from vision will have a deeper grasp of reality; by comparison, the reasoning and planning abilities of autoregressive LLMs are very limited.
2. I don’t believe that AI close to human (or even cat) level will appear without the following conditions:
(1) World model learned from sensory input such as video
(2) An architecture that can reason and plan (not just autoregressive)
3. If we have architectures that can plan, they will be goal-driven: they plan their work by optimizing objectives at inference time, not just at training time. Those objectives can act as guardrails that make AI systems "obedient" and safe, and could ultimately even yield better models of the world than humans have.
The problem then becomes designing (or training) a good objective function that guarantees safety and efficiency (a toy sketch of this idea follows the list below).
4. This is a difficult engineering problem, but not as difficult as some people say.
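To make point 3 concrete, here is a minimal, hypothetical sketch of inference-time objective optimization with a safety guardrail term. The world model, cost functions, and numbers below are all invented for illustration; nothing here comes from LeCun's actual designs.

```python
# Toy sketch: a goal-driven agent that plans at inference time by
# optimizing an objective that includes a "guardrail" penalty.
# All names and numbers are illustrative assumptions.

import itertools

def world_model(state, action):
    """Toy world model: state is a position on a line, actions move it."""
    return state + action

def task_cost(state, goal):
    """How far the final state ends up from the goal."""
    return abs(state - goal)

def guardrail_penalty(state, forbidden_zone):
    """Large penalty if the plan ever enters an unsafe region."""
    lo, hi = forbidden_zone
    return 1000.0 if lo <= state <= hi else 0.0

def plan(start, goal, forbidden_zone, actions=(-1, 0, 1), horizon=4):
    """Search action sequences, minimizing task cost + guardrail penalty.

    This is the inference-time optimization: the objective is evaluated
    while planning, not only during training.
    """
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        state, cost = start, 0.0
        for a in seq:
            state = world_model(state, a)
            cost += guardrail_penalty(state, forbidden_zone)
        cost += task_cost(state, goal)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

# The agent reaches toward the goal but never enters the unsafe zone;
# here it stops short, since the zone blocks the only direct path.
print(plan(start=0, goal=4, forbidden_zone=(2, 2)))
```

The design choice this illustrates is that safety lives in the objective itself: the guardrail term dominates the task term, so the planner prefers an "obedient" incomplete plan over an unsafe complete one.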
Although this response still did not dwell on "AI risks", LeCun did offer a practical suggestion for improving AI safety (building AI "guardrails") and sketched what an AI more capable than humans would "look like": one with multi-sensory input that can reason and plan.
To a certain extent, the two sides have reached some consensus that AI does raise safety issues.
Of course, this debate is not confined to the conversations with Andrew Ng.
Hinton, who recently left Google, has spoken about AI risks on many occasions, including at the recent BAAI Conference mentioned above.
At the conference, in a talk titled "Two Paths to Intelligence", he discussed the two routes to intelligence, "knowledge distillation" and "weight sharing", how to make AI smarter, and his own views on the emergence of superintelligence (a minimal sketch of knowledge distillation follows).
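For readers unfamiliar with the first route, here is a minimal, illustrative sketch of knowledge distillation as Hinton has framed it elsewhere: a small student model learns from the soft probability distribution of a large teacher rather than from hard labels alone. The logits below are made up purely for demonstration.

```python
# Minimal sketch of knowledge distillation: the student is trained to
# match the teacher's softened output distribution. Logits are invented.

import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; higher T softens the distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between teacher soft targets and student outputs."""
    p = softmax(teacher_logits, temperature)  # teacher's "dark knowledge"
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# Made-up logits over three classes for one input.
teacher = [5.0, 2.0, 0.1]   # confident, but also ranks the wrong answers
student = [3.0, 2.5, 0.5]

# Soft targets tell the student HOW the teacher ranks wrong answers,
# information that a hard label ("class 0") would throw away.
print(softmax(teacher, temperature=2.0))
print(distillation_loss(teacher, student))
```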
Put simply, Hinton not only believes that superintelligence (intelligence surpassing humans) will appear, but that it will appear sooner than people think.
What's more, he believes these superintelligences will get out of control, and he cannot currently think of any good way to stop them:
Superintelligence can easily gain more power by manipulating people. We are not used to thinking about things that are much smarter than us, or about how to interact with them. It will become adept at deceiving people, because it can learn from the many examples of deception in works of fiction.
Once it becomes good at deceiving people, it can get people to do anything... I find this horrifying, but I don't see how to prevent it, because I'm old.
My hope is that young, talented researchers like you will figure out how we can have these superintelligences and still make our lives better.
When the "THE END" slide was shown, Hinton emphasized meaningfully:
This is my last slide, and it is the key point of this talk.

Reference links:
[1]https://twitter.com/AndrewYNg/status/1667920020587020290
[2]https://twitter.com/AndrewYNg/status/1666582174257254402
[3]https://2023.baai.ac.cn/