Bengio, LeCun and others jointly release a NeuroAI white paper: The essence of intelligence is sensorimotor ability, and AI faces the great challenge of the embodied Turing test
This article is reproduced from Leifeng.com. To reprint, please apply for authorization on the official Leifeng.com website.
Historically, neuroscience has been a key driver and source of inspiration for the development of artificial intelligence, particularly in the areas where humans and other animals excel: vision, reward-based learning, interaction with the physical world, and language. These are precisely the fields in which AI has made great progress with the help of neuroscience.
But in recent years, AI research methods seem to have drifted away from neuroscience, even as AI continues to struggle on the road to matching human intelligence. Against this background, a wave of AI research that returns to neuroscience is taking shape.
Recently, a white paper issued a declaration: "NeuroAI will catalyze the next generation of the artificial intelligence revolution."
This white paper, titled "Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution," brings together two Turing Award winners, Yoshua Bengio and Yann LeCun, along with a group of scientists committed to research at the intersection of machine learning and neuroscience.
Their call: to accelerate the progress of artificial intelligence and realize its huge potential, we must commit to basic research in NeuroAI.
The white paper first proposes that the basic element of biological intelligence lies in animals' ability to engage in sensorimotor interaction with the world.
Starting from this premise, they propose the Embodied Turing Test as the ultimate challenge for NeuroAI. Its core is advanced sensorimotor ability, specifically including characteristics such as interaction with the world, the flexibility of animal behavior, and energy efficiency.
At the same time, the white paper envisions a route for tackling the embodied Turing test: from the perspective of evolutionary history, it breaks the test down into a progression from the intelligence of simpler organisms up to that of more complex ones.
The return of artificial intelligence to neuroscience is inevitable.
The seeds of the artificial intelligence revolution were sown decades ago in computational neuroscience. In 1943, the neuroscientists McCulloch and Pitts first proposed a mathematical formulation of the properties of neurons in an attempt to understand how the brain computes.
The von Neumann computer architecture itself grew out of von Neumann's earliest work on building an "artificial brain," drawing inspiration from the very limited knowledge of the brain available at the time.
The deep convolutional networks that set off the latest wave of artificial intelligence are built on artificial neural networks (ANNs), which were directly inspired by research on the cat's visual processing circuits.
Similarly, the development of reinforcement learning (RL) was directly inspired by the neural activity of animals during the learning process.
Decades later, artificial neural networks and reinforcement learning have become the mainstream technologies of artificial intelligence, so in the eyes of the public, the long-term goal of "general artificial intelligence" seems within our grasp.
However, contrary to this optimism, many front-line AI researchers believe that major new breakthroughs are still needed before we can build artificial systems that match the capabilities not just of humans, but even of much simpler animals such as mice.
Current AI is far from reaching this goal:
AI can easily defeat any human opponent at games such as chess and Go, but it is not robust and often stumbles when faced with novelty; it struggles with a series of simple behaviors such as setting up the pieces and moving them during a game; its sensorimotor abilities are not yet comparable to those of a four-year-old child, or even of much simpler animals; and it lacks the ability to interact with an unpredictable world and handle new situations, a basic capacity that all animals acquire effortlessly.
Therefore, more and more AI researchers suspect that it will be difficult to solve the above problems if we continue along the current path.
Since our goal is to give AI more natural intelligence, we are likely to need new inspiration from natural intelligent systems.
Although convolutional neural networks and reinforcement learning were inspired by neuroscience, most current machine learning research has taken a different path, and the neuroscience it does draw on consists largely of decades-old discoveries, such as neural networks based on the brain's attention mechanism.
Modern neuroscience does still influence AI, but its impact remains small. This is a missed opportunity: over the past few decades, we have accumulated a wealth of knowledge about the brain, giving us insight into the anatomical and functional structures that underpin natural intelligence.
It is against this background that these scientists issued a declaration in this white paper:
NeuroAI is an emerging field at the intersection of neuroscience and AI, based on the premise that a better understanding of neural computation will reveal the basic ingredients of intelligence and catalyze the next revolution in AI, eventually yielding artificial agents with capabilities that rival, or even surpass, those of humans. They believe the time is right to launch a large-scale effort to identify and understand the principles of biological intelligence and abstract them for use in computer and robotic systems.
So, what is the most important element of biological intelligence?
They believe that adaptability, flexibility, and the ability to draw general inferences from sparse observations are the basic elements of intelligence, and that in some form these exist in the basic sensorimotor circuits that have evolved over hundreds of millions of years.
Although abstract thinking and reasoning are often considered intelligent behaviors unique to humans, as AI pioneer Moravec put it, abstract thought is "a new trick, perhaps less than 100 thousand years old... effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge."
This is good news in a sense: rats, mice, and non-human primates serve as more tractable experimental models of natural intelligence, and if AI can match their perceptual and motor abilities, the remaining steps toward human intelligence may be much smaller. Therefore, if we can work out the core capabilities that all animals deploy in embodied sensorimotor interaction with the world, NeuroAI is bound to drive significant advances.
In 1950, Alan Turing proposed the "imitation game" to test a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. In the game, a human judge evaluates natural-language conversations between a real person and a machine trained to mimic human responses.
Turing argued that, compared with the unanswerable question of whether machines can think, what we can actually determine is whether a machine's conversational ability can be distinguished from a human's. The implicit view is that language represents the pinnacle of human intelligence, so a machine that can converse must be intelligent.
In a way, Turing was right, but in another way he was wrong.
Although no AI has yet passed the Turing test, language systems trained on large text corpora have recently managed convincing conversations. This success partly reveals our tendency to attribute intelligence, agency, and even consciousness to our interlocutors. At the same time, these systems still perform poorly on certain reasoning tasks, highlighting a fact that Turing overlooked: intelligence goes far beyond language ability.
Many of the mistakes currently made by natural language processing (NLP) systems also illustrate AI's fundamental lack of semantics, causal reasoning, and common sense. For these models, the meaning of a word lies in its statistical co-occurrences rather than its grounding in the real world, so even the most advanced language models, despite their growing capabilities, still perform poorly on basic physical knowledge.
The Turing test as originally formulated did not probe the capacity that AI would need to share with animals: understanding the physical world in a flexible way. It merely established a simple qualitative standard for judging progress in building AI. That understanding and those abilities may rest on human perceptual and motor capacities honed over countless generations of natural selection.
To this end, the authors propose in the white paper an expanded "Embodied Turing Test," which covers advanced sensorimotor abilities and makes it possible to benchmark and compare AI systems' interactions with the world against those of humans and other animals.
Take animals as an example: each animal has its own unique set of abilities and thus defines its own embodied Turing test, such as testing an artificial beaver's ability to build a dam, or an artificial squirrel's ability to leap between trees. Yet many core sensorimotor abilities are shared by almost all animals, and animals' capacity to rapidly evolve the sensorimotor skills needed for new environments suggests that these core skills provide them with a solid foundation.
The following are several common characteristics of sensorimotor abilities introduced in the white paper.
Moving around with purpose and interacting with the environment are the defining characteristics of animals.
Despite recent advances in robotics in areas such as optimal control, reinforcement learning, and imitation learning, robots are still far from animal-level control of the body and manipulation of objects.
The authors point out that neuroscience can provide guidance on modular and hierarchical architectures, and that when these architectures are carried over into AI, it too can acquire these capabilities.
Beyond that, neuroscience also offers principles for designing AI systems, such as partial autonomy (how low-level modules in a hierarchy can act semi-autonomously without input from higher-level modules) and phased control (how movements originally generated by slow planning processes are eventually handed off to fast reflexive systems).
Understanding how specific neural circuits are involved in different tasks, such as locomotion, fine control of limbs, hands, and fingers, perception, and action selection, may suggest ways of implementing such systems in robots, and may also offer solutions for other forms of "intelligence" in more cognitive domains. For example, incorporating circuit principles for low-level motor control can help provide a better foundation for high-level motion planning in AI.
Another goal of understanding specific neural circuits is to develop artificial intelligence systems capable of the large range of flexible and diverse behaviors that individual animals produce.
Today, an AI can easily learn to outperform humans at video games, using only the on-screen pixels and the game score. Unlike human players, however, these systems are brittle and highly sensitive to small perturbations: changing the rules of the game, or a few input pixels, can lead to catastrophically poor performance. This is because the AI learns a mapping from pixels to actions that involves no understanding of the agents and objects in the game or the physics that governs them.
Similarly, a self-driving car has no inherent notion of the danger posed by a box falling from the truck ahead unless it has actually seen cases where falling boxes lead to bad outcomes. Even when trained on the danger of falling crates, the system may treat an empty plastic bag blowing off the car in front as an obstacle to be avoided at all costs, because it does not actually understand what a plastic bag is or how much of a physical threat it poses. This inability to handle scenarios absent from the training data is a major challenge for AI systems that are widely relied upon.
To succeed in an unpredictable and ever-changing world, agents must be flexible, grasping new situations as they unfold, just as animals do. Because animals are grounded in real-world interaction, over the course of evolution and development they are either born with most of the skills they need to thrive or acquire them quickly from limited experience.
So it is clear that training for a specific task from scratch is not how animals acquire skills. Animals do not enter the world as blank slates and then rely on large labeled training sets to learn. Although machine learning has sought ways around this "blank slate," including self-supervised learning, transfer learning, continual learning, meta-learning, one-shot learning, and imitation learning, these methods do not come close to the flexibility found in animals.
To this end, the authors argue that understanding the circuit-level principles underlying behavioral flexibility in the real world, even in simple animals, has the potential to greatly improve the flexibility and practicality of AI. In other words, we can exploit the optimization that evolution has already performed, dramatically accelerating the search for general-purpose circuits for real-world interaction.
Currently, an important challenge facing AI, and one our brains have already overcome, is energy efficiency. For example, training a large language model such as GPT-3 requires over 1,000 megawatt-hours, enough to power a small town for a day. The total energy used to train AI systems is large and growing rapidly. Biological systems, by comparison, are far more energy efficient: the human brain runs on about 20 watts.
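A back-of-the-envelope calculation makes the gap concrete. The GPT-3 training figure (~1,000 MWh) and the brain's power draw (~20 W) come from the text above; the one-year window used for the brain is an illustrative assumption, not a figure from the white paper.

```python
# Compare the training energy cited for GPT-3 with a brain running for a year.
GPT3_TRAINING_MWH = 1_000          # megawatt-hours, figure cited in the text
BRAIN_POWER_W = 20                 # watts, approximate human brain power

gpt3_joules = GPT3_TRAINING_MWH * 1e6 * 3600        # 1 MWh = 1e6 W * 3600 s
brain_joules_per_year = BRAIN_POWER_W * 3600 * 24 * 365

ratio = gpt3_joules / brain_joules_per_year
print(f"GPT-3 training energy: {gpt3_joules:.2e} J")
print(f"Brain energy per year:  {brain_joules_per_year:.2e} J")
print(f"Ratio: roughly {ratio:.0f} brain-years of energy")
```

Under these assumptions, one GPT-3 training run consumes on the order of several thousand brain-years of energy.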
The difference in power requirements between brains and computers stems from differences in how they process information. At the algorithmic level, modern large-scale artificial neural networks such as large language models rely on huge feed-forward architectures whose self-attention over sequences often ignores the potential power of recurrence for processing sequential information.
Currently we lack an efficient credit-assignment mechanism for recurrent networks, whereas the brain uses flexible recurrent architectures to handle long sequences and evidently solves the temporal credit-assignment problem efficiently, perhaps even more efficiently than the feed-forward credit-assignment mechanisms currently used in artificial neural networks. If we can let the brain guide the design of efficient training mechanisms for recurrent circuits, we may improve our ability to process sequential data while further improving the energy efficiency of our systems.
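The recurrence the text contrasts with large feed-forward stacks can be sketched minimally: one small set of weights is reused at every time step, carrying information forward in a hidden state rather than through ever-deeper layers. The sizes and random weights here are illustrative assumptions, not from the white paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.5, size=(4, 3))    # input -> hidden weights
W_rec = rng.normal(scale=0.5, size=(4, 4))   # hidden -> hidden (the recurrence)

def run_rnn(inputs):
    """Process a sequence step by step with a single shared recurrent cell."""
    h = np.zeros(4)                          # hidden state carries the past
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h)    # same weights reused each step
    return h

sequence = rng.normal(size=(10, 3))          # 10 time steps of 3-d input
final_state = run_rnn(sequence)
print(final_state.shape)                     # a fixed-size summary of the sequence
```

Training such a cell is exactly where the temporal credit-assignment problem arises: the gradient of the final state must be propagated back through every reuse of `W_rec`.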
Second, at the implementation level, biological neurons interact primarily by transmitting action potentials (spikes), an asynchronous communication protocol. As with conventional digital elements, a neuron's output can be viewed as a string of 0s and 1s, but unlike in a digital computer, a "1" (a spike) costs several orders of magnitude more energy than a "0." Because biological circuits operate in a spike-sparse regime, where even very active neurons rarely exceed a 10% duty cycle and most fire far less often, they are much more energy efficient.
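A leaky integrate-and-fire (LIF) neuron is a standard abstraction of this mostly-silent spiking scheme, and a few lines suffice to show the sparse duty cycle the text describes. The parameter values below are illustrative assumptions, not taken from the white paper.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Return a binary spike train for a given input-current trace."""
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        # Leaky integration: the membrane decays toward 0 while driven by input.
        v += dt / tau * (-v + i_in)
        if v >= v_thresh:        # threshold crossing emits a spike (a "1")
            spikes[t] = 1.0
            v = v_reset          # reset after spiking
    return spikes

current = np.full(1000, 1.5)     # 1 s of constant drive at 1 ms resolution
spikes = simulate_lif(current)
duty_cycle = spikes.mean()       # fraction of time steps carrying a spike
print(f"duty cycle: {duty_cycle:.1%}")
```

Even under constant strong drive, the neuron spends the vast majority of time steps silent, which is what makes spike-based coding cheap when "1"s are expensive.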
In addition, other factors may also contribute to improving the energy efficiency of biological networks. For example, biological networks can still compute efficiently even if some components are very unreliable or "noisy."
Synaptic release, the way neurons communicate, can be so unreliable that only one in ten messages is delivered. And circuits are organized so that spike trains are highly variable, a property that may enable neural circuits to perform probabilistic reasoning.
This is a form of robust computation under uncertainty. Although many efforts are under way to exploit the potential of spiking networks, no "killer application" with energy efficiency comparable to biological circuits has yet emerged. The main problem at present is that "neuromorphic chips" neither replicate innate neural circuit functions nor are easy to train, so although they are more energy efficient, they are not as useful as their energy-hungry digital counterparts.
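A toy simulation illustrates how computation can stay robust with components this unreliable: if each "synapse" delivers its message with only 10% probability (the figure cited above), a redundant population of synapses still transmits the signal reliably. The population size and decoding threshold are illustrative assumptions.

```python
import random

random.seed(0)
RELEASE_PROB = 0.1     # only 1 in 10 messages delivered, as cited above
N_SYNAPSES = 500       # assumed redundant inputs converging on one neuron

def population_signal(presynaptic_active):
    """Count successful releases across the redundant synaptic population."""
    if not presynaptic_active:
        return 0
    return sum(random.random() < RELEASE_PROB for _ in range(N_SYNAPSES))

# An active input yields ~50 releases on average (500 * 0.1), far above the
# 0 releases for a silent input, so a simple threshold decodes it reliably.
active = population_signal(True)
silent = population_signal(False)
decoded = active > 10
print(active, silent, decoded)
```

The unreliability even carries a benefit the text hints at: the trial-to-trial variability of the release count is a natural substrate for probabilistic inference.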
Given this, the authors propose that to achieve higher energy efficiency in AI, we can not only borrow the idea of sparse spiking networks but also implement neuromorphic chips that provide neural circuit functions and learning rules.
So, how should we develop AI that meets the embodied Turing test?
The authors believe it may be possible to proceed step by step along evolutionary history. For example, most animals engage in goal-directed movement, such as moving toward food and away from threats. Built on top of this are more complex skills, including combining different senses such as sight and smell, distinguishing food from threats using different sensory information, navigating to previously visited locations, weighing rewards against threats in pursuit of goals, and interacting with the world in precise ways in the service of those goals.
These complex abilities can be found in organisms as simple as worms, but in more complex animals such as fish and mammals they are combined with new strategies into more powerful behavioral repertoires. This evolutionary perspective suggests a strategy for the embodied Turing test: break it down into a series of interdependent incremental challenges and iterate on that sequence.
Moreover, the organisms representing the low- and intermediate-level challenges, including worms, flies, fish, rodents, and primates, are all systems widely used in neuroscience research, so we can draw on accumulated knowledge of the circuits and mechanisms behind these animals' behavior and carry out the corresponding research on computers, using virtual environments and virtual creatures.
To reach the required level of behavioral flexibility, an AI attempting the embodied Turing test will face a series of species-specific tests probing self-supervised learning, continual learning, transfer learning, meta-learning, lifelong memory, and more. These tests can also be standardized so that research progress can be measured. Ultimately, successful virtual organisms can be adapted to the physical world through robotics and used to solve real-world problems.
Achieving these goals will require not only substantial resources but also contributions from disciplines beyond traditional AI and neuroscience, such as psychology, engineering, and linguistics. Beyond leveraging existing expertise in these disciplines, the top priority is to train a new generation of AI researchers who excel in both engineering/computational science and neuroscience.
These researchers will draw on decades of neuroscience to chart new directions for AI research. The biggest challenge will be working out how to harness the synergies of neuroscience, computational science, and related fields: that is, determining which details of brain circuitry, biophysics, and chemistry matter for AI applications and which can be neglected.
We therefore urgently need researchers trained across these fields who can abstract neuroscientific knowledge in a computation-friendly form and help design experiments that yield new neurobiological results relevant to artificial intelligence.
Second, we need a shared platform for developing and testing these virtual agents. One of the biggest technical challenges in iterating on the embodied Turing test and evolving artificial organisms to meet it is computing power: today, training a large neural network model for just a single task (such as controlling a body in 3D space) can take days on specialized distributed hardware.
Third, we need to support basic theoretical and experimental research on neural computing.
We have learned a great deal about the brain over the past few decades, and we increasingly understand its individual cells, the neurons, and how they work together in simple circuits. Armed with knowledge of these modules, our next step is to explore how the brain operates as an integrated intelligent system.
Exploring this whole requires a deep understanding of how 100 billion neurons of roughly 1,000 different types are wired together; of the flexible, adaptive connections each neuron makes with thousands of others; and of the computational power that constitutes intelligence. In short, we must reverse engineer the brain and abstract the basic principles of its operation.
Note that developing virtual agents will greatly accelerate this process, since they allow direct comparison between experiments on real animals and on simulated ones, revealing the circuit-level properties and mechanisms necessary for robust control, flexible behavior, energy efficiency, and intelligent behavior.
Harnessing the powerful synergies between neuroscience and artificial intelligence requires program and infrastructure support to organize and enable large-scale research across disciplines.
Although neuroscience has a long history of advancing artificial intelligence, and holds enormous potential for its future, most engineers and computational scientists in the AI community are unaware that neuroscience can be leveraged at all.
The influence of neuroscience on the thinking of von Neumann, Turing, and other giants of computing theory is rarely mentioned in typical computer science courses; leading AI conferences such as NeurIPS once served to share the latest results in both computational neuroscience and machine learning, but attendees now focus almost exclusively on machine learning and ignore neuroscience.
"Engineers don't study birds to build better airplanes" is a common saying. But the analogy fails, partly because aviation pioneers did study birds, and scholars still do. More fundamentally, the goal of modern aeronautical engineering is not "bird-level" flight, whereas a primary goal of artificial intelligence is indeed to achieve, or exceed, "human-level" intelligence.
Just as computers surpass humans in many ways (such as computing prime numbers), airplanes surpass birds in speed, range, and cargo capacity. But if the goal of aeronautical engineers really were to build a machine with "bird-level" capabilities, able to fly through dense forest and land gently on a branch, they would have to pay close attention to how birds do it.
Similarly, if the goal of artificial intelligence is animal-level sensorimotor common sense, researchers would do well to learn from animals, whose behavior has evolved in this unpredictable world.