Gary Marcus Publicly Challenges Hinton and Musk: Deep Learning Is Hitting a Wall, and I'm Betting $100,000
"If someone says (deep learning) has hit a wall, all they have to do is make a list of things that deep learning can't do. In five years, we'll be able to show that deep learning can do them."
On June 1, the reclusive Geoffrey Hinton appeared as a guest on UC Berkeley professor Pieter Abbeel's podcast. The two had a 90-minute conversation covering topics ranging from masked autoencoders and AlexNet to spiking neural networks.
On the show, Hinton explicitly questioned the view that "deep learning has hit a wall."
The statement "Deep learning has hit a wall" comes from an article in March by the well-known AI scholar Gary Marcus. To be precise, he believes that "pure end-to-end deep learning" has almost come to an end, and the entire AI field must find a new way out.
Where is that way forward? In Gary Marcus's view, symbolic processing has a bright future. The broader community, however, has never taken this view seriously, and Hinton has even said: "Any investment in symbol-processing methods is a huge mistake."
Hinton's public rebuttal on the podcast naturally caught Gary Marcus's attention.
A dozen or so hours ago, Gary Marcus posted an open letter to Geoffrey Hinton on Twitter:
The letter reads: "I noticed that Geoffrey Hinton is looking for some challenging targets. I actually wrote such a list with the help of Ernie Davis, and last week I offered Musk a $100,000 bet on it."
Where does Musk come into this? The story begins with a tweet from the end of May.
For a long time, people have pictured AGI as the kind of AI seen in movies such as 2001: A Space Odyssey (HAL) and Iron Man (JARVIS). Unlike today's AI, which is trained for one specific task, AGI would, more like the human brain, be able to learn how to complete new tasks.
Most experts believe AGI is decades away, and some believe it will never be achieved. One survey of experts in the field put the odds of achieving AGI by 2099 at 50%.
Musk, by contrast, is more optimistic, and even said publicly on Twitter: "2029 is a critical year. I will be surprised if we have not achieved AGI by then. Hopefully the same goes for the people on Mars."
Gary Marcus, who disagreed, quickly asked: "How much are you willing to bet?"
Although Musk did not reply to the question, Gary Marcus went on to say that he could set up a $100,000 wager on Long Bets.
In Gary Marcus's view, Musk's predictions on such matters are not reliable: "For example, in 2015 you said fully autonomous cars were two years away. You have said roughly the same thing almost every year since, and fully autonomous driving still has not arrived."
In his blog, he also laid out five criteria for judging whether AGI has been achieved, as the terms of the bet:
"Here's my advice if you (or anyone else) manage to do it in 2029 At least three, even if you win. Deal? How about one hundred thousand dollars?"
As more people joined in, the stake grew to $500,000. So far, however, Musk has not responded.
On June 6, Gary Marcus published an article in Scientific American reiterating his position: AGI is not "just around the corner."
To the average person, it may look as though huge progress is being made in artificial intelligence. According to media reports, OpenAI's DALL-E 2 can seemingly turn any text into an image, GPT-3 appears omniscient, and DeepMind's Gato system, released in May, performs well on every task it is given. One senior DeepMind executive has even boasted of being on the quest for artificial general intelligence (AGI), AI with human-level intelligence.
Don't be fooled. Machines may one day be as smart as humans, perhaps even smarter, but that day is far off. There is still a great deal of work to do to build machines that truly understand and reason about the real world. What we really need now is less posturing and more basic research.
To be sure, AI is making progress in some areas: synthetic images look increasingly realistic, and speech recognition works in noisy environments. But we are still a long way from general, human-level AI. For example, AI cannot yet understand the real meaning of articles and videos, nor can it handle unexpected obstacles and interruptions. We still face the same challenge AI has faced for years: making AI reliable.
Take Gato as an example. Given the task of captioning an image of a pitcher throwing a baseball, the system returned three different answers: "A baseball player pitching on a baseball field," "A man throwing a baseball at a pitcher on a baseball field," and "A baseball player at bat and a catcher in a baseball game." The first answer is correct, while the other two appear to contain additional players who are not visible in the image. This suggests that Gato does not know what is actually in the image, only what is typical of roughly similar images. Any baseball fan could tell that this is the pitcher who has just thrown the ball; a catcher and a batter would be expected nearby, but they do not appear in the image.
Similarly, DALL-E 2 confuses the two positional relationships in "a red cube on top of a blue cube" and "a blue cube on top of a red cube." Likewise, the Imagen model released by Google in May could not distinguish between "an astronaut riding a horse" and "a horse riding an astronaut."
It may still seem a little funny when a system like DALL-E gets things wrong, but some AI systems can cause very serious problems when they fail. For example, a self-driving Tesla recently drove straight toward a worker holding a stop sign in the middle of the road, and slowed down only after the human driver intervened. The system could recognize a human on its own and a stop sign on its own, but it failed to slow down when confronted with the unusual combination of the two.
So, unfortunately, AI systems remain unreliable and have trouble adapting quickly to new environments.
Gato performed well on all of the tasks DeepMind reported, but rarely as well as other contemporary systems. GPT-3 often writes fluent prose, but it still struggles with basic arithmetic, and it has so little grip on reality that it readily produces absurd sentences such as "Some experts believe that eating socks helps the brain change its state."
The deeper problem is that the largest research teams in AI are no longer found in academic institutions but in large technology companies. Unlike universities, companies have no incentive to compete fairly. Their new papers are announced through press releases rather than academic review, attracting media coverage while sidestepping peer review. What we learn is only what the companies themselves want us to know.
The software industry has a word for this strategy: "demoware," software designed to look good in a demo but not necessarily to work in the real world. AI products marketed this way either never ship properly or fall apart when they meet reality.
Deep learning has improved machines' ability to recognize patterns in data, but it has three major flaws: the patterns it learns are superficial rather than conceptual; the results it produces are hard to interpret; and it generalizes poorly. As Harvard computer scientist Leslie Valiant has pointed out, the central challenge going forward is to unify learning and reasoning in AI.
Right now, companies are chasing benchmark scores rather than new ideas, pushing for incremental improvements with existing techniques instead of stopping to ask more fundamental questions.
We need more people asking basic questions such as "How do we build a system that can learn and reason at the same time?" rather than chasing flashy product demos.
The debate over AGI is far from over, and other researchers are weighing in. In a blog post, researcher Scott Alexander noted that Gary Marcus is a legend and that what he has written over the past few years has been more or less accurate, yet deep learning still has its value.
For example, Gary Marcus had previously criticized a number of problems with GPT-2. Eight months later, when GPT-3 appeared, those problems had been fixed. Yet Gary Marcus showed GPT-3 no mercy either, writing an article arguing that "OpenAI's language generator has no idea what it is talking about."
In essence, one point of view has so far proved correct: "Gary Marcus mocks large language models as a gimmick, yet these models keep getting better, and if that trend continues, AGI will be realized before long."