LeCun on AGI: Large models and reinforcement learning are both dead ends! My 'world model' is the new way
Yann LeCun, one of the most prominent figures in contemporary AI and the driving force behind Meta's AI lab, has long been committed to giving machines a basic grasp of how the world works, that is, to endowing AI with common sense. His earlier approach was to train neural networks on video clips and have the AI predict, pixel by pixel, what would appear in the next frame of everyday activity. Unsurprisingly, he admits that this approach hit a brick wall. After somewhere between several months and a year and a half of rethinking, LeCun arrived at new ideas for the next generation of AI.
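To make that earlier approach concrete, here is a minimal, illustrative sketch of pixel-level next-frame prediction in PyTorch. The network, layer sizes, and random stand-in frames are assumptions for the example, not Meta's actual models; the point is only that the loss is computed on every pixel of the predicted frame.

```python
import torch
import torch.nn as nn

# Hypothetical toy next-frame predictor: regress every pixel of frame t+1 from frame t.
# Architecture and sizes are illustrative only.
class NextFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),   # back to an RGB frame
        )

    def forward(self, frame):
        return self.net(frame)

model = NextFramePredictor()
frame_t = torch.rand(1, 3, 64, 64)    # current video frame (random stand-in data)
frame_t1 = torch.rand(1, 3, 64, 64)   # ground-truth next frame
loss = nn.functional.mse_loss(model(frame_t), frame_t1)   # error on every single pixel
loss.backward()
```

Part of the difficulty is visible in the loss itself: the model is penalized for every hard-to-predict pixel, which is exactly the kind of detail LeCun now argues is beside the point.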
In an interview with MIT Technology Review, LeCun outlined his new research direction, saying it would give machines a common-sense foundation for exploring the world. For LeCun, this is the first step toward building AGI (artificial general intelligence). Machines that can think like humans have been the field's guiding vision since the birth of AI, and also one of its most controversial ideas.
But LeCun's new path is still incomplete, raising more questions than it answers. The biggest question is that LeCun himself admits he does not yet know how to build the kind of AI he describes. At the core of the approach is a neural network that looks at and learns from the real world in a different way than before. LeCun has given up on having AI guess the next video frame pixel by pixel; instead, the new network learns only the knowledge essential to the task at hand.
LeCun then plans to pair this neural network with another one he calls a "configurator." The configurator decides which details the main network must learn and adjusts the main system accordingly. For LeCun, AGI will be an integral part of how humans interact with future technology. This outlook, of course, dovetails with that of his employer, Meta, which has bet heavily on building the metaverse.
LeCun says that within 10 to 15 years, AR glasses will take over the role now played by smartphones. Those glasses will need a virtual intelligent assistant that can help with everyday activities, and to be truly useful, such assistants will have to more or less match the intelligence of the human brain.
LeCun's recent enthusiasm is for the "world model," which, as he describes it, is the basic operating mode of most animal brains: running a simulation of the real world. From infancy, animals develop intelligence through prediction and trial and error. Young children build the foundations of intelligence in their first months of life by observing how the real world moves and pushes back.
After watching a small ball fall hundreds of times, an ordinary infant acquires a basic grasp of gravity's existence and behavior without ever taking a physics class or learning Newton's laws. This kind of intuitive, tacit reasoning is what ordinary people call "common sense." Humans use common sense to tell plausible futures from impossible ones, to foresee the consequences of their actions, and to make decisions accordingly. Such intelligence requires neither pixel-accurate detail nor an exhaustive library of physical parameters; even someone who cannot see, or cannot read, can exercise it perfectly well.
But teaching a machine common sense is hard. Today's neural networks must be shown thousands of examples before they begin to dimly pick up the underlying pattern. LeCun says the basis of intelligence is the common-sense ability to predict the immediate future; having given up on pixel-by-pixel prediction, he wants to change the question. His analogy: imagine holding a pen in the air and letting it go. Common sense tells you the pen will certainly fall, but the precise spot where it lands is beyond what human intelligence can predict. Under the old AI paradigm, the system would have to run a detailed physics model both to decide whether the pen falls and to pin down exactly where it lands.
Now LeCun wants the AI to predict only the common-sense conclusion that the pen will fall; the precise landing point is simply outside the scope of the problem. This, he says, is the basic pattern of the "world model."
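As a rough illustration of that shift, the sketch below makes its prediction in an abstract representation space rather than in pixel space. This is not LeCun's actual architecture; the encoder, the 32-dimensional state, and the loss are all assumptions for the example. The only point is that the prediction target is an encoding of the next state (roughly, "the pen ends up on the floor"), not the pixels themselves.

```python
import torch
import torch.nn as nn

# Encode both frames into abstract representations and predict only in that space.
# Everything here (encoder shape, 32-dim representation) is assumed for illustration.
encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 32),            # compact abstract state
)
predictor = nn.Linear(32, 32)      # predicts the representation of the next state

frame_t = torch.rand(1, 3, 64, 64)
frame_t1 = torch.rand(1, 3, 64, 64)

z_t = encoder(frame_t)
z_t1 = encoder(frame_t1).detach()                     # target is abstract, not pixels
loss = nn.functional.mse_loss(predictor(z_t), z_t1)   # error only on what the encoder kept
loss.backward()
```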
LeCun says he has built an early version of this world model that can perform basic object recognition, and he is now training it to make the kind of common-sense predictions described above.
However, LeCun admits he does not yet understand how the configurator should work. In his conception, the configurator is the control component of the entire AGI system: it decides what common-sense predictions the world model needs to make at any given moment and adjusts, accordingly, the level of detail in the data the world model handles. LeCun is convinced a configurator is essential, but he does not know how to train a neural network to do the job.
"We need to explore a list of feasible technologies, and this list does not exist yet." In LeCun's vision, "configurator" and "world model" are the future AGI The two core parts of the basic cognitive architecture are based on which the cognitive model for perceiving the world, the incentive model that drives AI to adjust behavior, etc. can be developed. LeCun said that this way the neural network can successfully simulate every part of the human brain. For example, the "configurator" and "world model" play the role of the prefrontal lobe, the motivation model is the amygdala of AI, and so on.
Cognitive architectures and prediction models at multiple levels of detail are ideas the field established years ago. But as deep learning became the mainstream of AI, many of these older ideas fell out of fashion. Now LeCun is returning to that traditional wisdom: "The AI research community has largely forgotten these things."
The reason for going back to the old road is that LeCun is convinced the industry's current mainstream paths have reached a dead end. On how to build AGI, there are two dominant views in the field today.
The first is scaling: as with OpenAI's GPT series and DALL-E series, believers hold that the bigger the model the better, and that once a model grows past some critical size, human-level intelligence will simply emerge.
The second is reinforcement learning: continual trial and error, with the AI rewarded or punished according to the outcome. This is the method DeepMind used to build its various board-game and video-game AIs. Believers in this path hold that as long as the reward incentives are set correctly, reinforcement learning will eventually produce a true AGI.
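That recipe can be reduced to a very small loop: act, observe a reward, and nudge value estimates toward whatever paid off. The tabular sketch below is a generic illustration with made-up actions and a made-up reward, not DeepMind's actual training setup; LeCun's objection, quoted next, is about how many such trials even trivial behaviors require.

```python
import random

# Generic trial-and-error loop: the agent learns which action is rewarded.
q_values = {"left": 0.0, "right": 0.0}   # value estimate per action
alpha, epsilon = 0.1, 0.2                # learning rate, exploration rate

def reward(action: str) -> float:
    return 1.0 if action == "right" else 0.0   # hypothetical reward signal

for _ in range(1000):
    if random.random() < epsilon:
        action = random.choice(list(q_values))     # explore: try something at random
    else:
        action = max(q_values, key=q_values.get)   # exploit the best known action
    q_values[action] += alpha * (reward(action) - q_values[action])

print(q_values)   # the rewarded action ("right") ends up with the higher value
```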
LeCun is scathing about both camps: "Scaling up existing large language models without limit and eventually getting human-level AI? I never believed that absurd argument for a second. These models can only process text and image data; they have no direct experience of the real world." And: "Reinforcement learning needs enormous amounts of data to train a model to perform even the simplest tasks. I don't think this approach has any chance of producing AGI."
Reactions in the industry to LeCun's views are mixed. If his vision were realized, AI could become a foundational technology on the scale of the internet. But his announcement says nothing about his model's performance, incentive mechanism, control mechanism, and so on. Still, admirers and critics alike agree these gaps are a secondary concern, since reckoning with them is a long way off: even LeCun cannot build AGI today.
LeCun himself acknowledges as much. He says he only hopes to plant the seeds of a new theoretical path and let those who come after build on it. "Achieving this goal will take a great many people and an enormous amount of effort. I'm raising it now simply because I believe this is ultimately the right path." Even failing that, LeCun hopes to persuade his colleagues not to fixate on large models and reinforcement learning alone, and to keep an open mind. "I hate to see people wasting time."
Yoshua Bengio, another leading figure in AI and a good friend of LeCun's, was pleased to see his old friend lay out his thinking in full. "Yann has been talking about this for a while, and I'm quite happy to see him pull it all together in one place. But these are statements of research direction rather than reports of results. We usually only discuss such things in private; saying them publicly carries real risk."
David Silver, who leads the development of the game AI AlphaZero at DeepMind, rejects LeCun's criticism of his project but welcomes his vision.
"The world model described by LeCun is indeed an exciting new idea." Melanie Mitchell of the Santa Fe Institute in California agreed with LeCun: "The industry really doesn't see this kind of thing in the deep learning community very often. point of view. But the big language model really lacks both memory and the backbone of the internal world model that can play a role."
Natasha Jaques of Google Brain disagrees: "Everyone has seen how remarkably capable large language models are and how much human knowledge they absorb. Without a language model, how is the world model LeCun proposes supposed to take in that knowledge? Even humans learn not only from personal experience but also from what others tell them."