


Turing Award Winner Hinton: I Am Old; I Leave It to You to Control AI Smarter Than Humans
Remember how experts split into two camps over whether AI might exterminate mankind?
Puzzled as to why "AI will cause risks", Andrew Ng recently launched a dialogue series with two Turing Award winners:
Do AI risks really exist? What exactly are they?
Interestingly, after in-depth conversations with Yoshua Bengio and Geoffrey Hinton, he found that they had "reached a lot of consensus"!
They both believe that the two camps should jointly discuss the specific risks artificial intelligence may create, and clarify how well AI actually understands things. Hinton also specifically named Turing Award winner Yann LeCun as a "representative of the opposition":
The debate on this issue is still fierce; even respected scholars like Yann believe that large models do not really understand what they are saying.
Musk also took great interest in this conversation.
In addition, at the recent BAAI Conference, Hinton once again "preached" about the risks of AI, saying that superintelligence smarter than humans will appear soon:
We are not used to thinking about things that are much smarter than us, or about how to interact with them.
I currently see no way to prevent superintelligence from "getting out of control", and I am old. I hope more young researchers will master methods for controlling superintelligence.
Let’s take a look at the core points of these conversations and the opinions of different AI experts on this matter.
Andrew Ng in Dialogue with Turing Award Winners: AI Safety Needs Consensus
The first was the dialogue with Bengio, in which Ng and he reached a key consensus:
Scientists should try to identify “specific scenarios where AI risks exist.”
In other words, both sides need to agree on the scenarios in which AI could cause major harm to human beings, or even lead to human extinction.
Bengio believes that the future of AI is full of "fog and uncertainty", so it is necessary to find out some specific scenarios where AI will cause harm.
Then came the conversation with Hinton, in which the two reached two key points of consensus.
On the one hand, all scientists must discuss the issue of "AI risks" thoroughly in order to formulate good policies;
On the other hand, AI does in some sense understand the world, and listing the key technical questions around AI safety can help scientists reach consensus.
In the process, Hinton highlighted the key question on which consensus is needed: whether large dialogue models such as GPT-4 and Bard really understand what they are saying:
Some people think they understand; others think they are just stochastic parrots.
I think we all believe they understand (what they are saying), but some scholars we respect very much, such as Yann, think they do not.
Of course, LeCun, who had been "called out", soon responded and laid out his views in earnest:
We all agree that "everyone needs to reach consensus on some issues." I also agree with Hinton that LLMs have some understanding, and that saying they are "just statistics" is misleading.
1. But their understanding of the world is very superficial, largely because they are trained only on plain text. AI systems that learn how the world works from vision will have a deeper understanding of reality; by comparison, the reasoning and planning capabilities of autoregressive LLMs are very limited.
2. I don't believe that AI close to human level (or even cat level) will appear without the following:
(1) World model learned from sensory input such as video
(2) An architecture that can reason and plan (not just autoregressive)
3. If we have architectures that can plan, they will be goal-driven: they will plan by optimizing objectives at inference time (not just at training time). These objectives can act as guardrails that make AI systems "obedient" and safe, and may ultimately even produce better models of the world than humans have.
The problem then becomes designing (or training) a good objective function that guarantees safety and efficiency.
4. This is a difficult engineering problem, but not as difficult as some people say.
Although this response still did not directly address "AI risks", LeCun offered practical suggestions for improving AI safety (building "guardrails" for AI) and sketched what a more powerful AI would look like: one with multi-sensory input that is capable of reasoning and planning.
To a certain extent, the two sides have reached some consensus that AI does pose safety issues.
Hinton: Superintelligence Is Closer Than We Imagine
Of course, it's not just the conversation with Andrew Ng.
Hinton, who recently left Google, has discussed the topic of AI risks on many occasions, including the recent BAAI Conference he attended.
At the conference, under the theme "Two Routes to Intelligence", he discussed the two routes of "knowledge distillation" and "weight sharing", how to make AI smarter, and his own views on the emergence of superintelligence.
Simply put, Hinton not only believes that superintelligence (intelligence greater than humans') will appear, but that it will appear sooner than people think.
Moreover, he believes these superintelligences will get out of control, and he currently cannot think of a good way to stop them:
Superintelligence can easily gain more power by manipulating people. We are not used to thinking about things that are much smarter than us, or about how to interact with them. It will become adept at deceiving people, because it can learn from examples of deception in works of fiction.
Once it becomes good at deceiving people, it can get people to do anything... I find this terrifying, but I see no way to prevent it, because I am old.
My hope is that young, talented researchers like you will figure out how we can have these superintelligences and still make our lives better.
When the "THE END" slide appeared, Hinton added, pointedly:
This is my last slide, and it is the point of this talk: the end.
[1] https://twitter.com/AndrewYNg/status/1667920020587020290
[2] https://twitter.com/AndrewYNg/status/1666582174257254402
[3] https://2023.baai.ac.cn/