Make AI think like a baby! DeepMind's "PLATO" model published in Nature sub-journal
Paper address: https://www.nature.com/articles/s41562-022-01394-8
However, before getting into the project itself, let me give an example to make it easier to understand.
If I stand in front of you holding a pen and then hide the pen behind my back, you can no longer see it, right?
But the pen must still exist, right?
This simple truth is understood not only by you, but even by a two-month-old baby.
The reason behind it, however, is intriguing. Scientists wanted to know: why do people naturally understand this principle?
This DeepMind project starts from that simple curiosity.
The fact that "the pen hidden from view is still there" is one of thousands of pieces of physical common sense, and DeepMind's scientists wanted to compare this kind of intuitive physics between AI and babies.
Luis Piloto of Princeton University and his colleagues developed a deep learning AI system that can understand some common-sense laws of the physical world.
In this way, future computer models could better imitate human thinking and solve problems with the same kind of cognition a baby has.
In general, an AI model starts as a blank slate and is then trained on a variety of examples; from the input data and examples, the model builds its knowledge.
However, scientists point out that this is not the case for babies.
Babies do not learn from scratch; they are born with certain expectations about physical objects.
Let’s take the hidden pen above as an example. Babies innately know that even if the pen is hidden, it will still be there.
This is the underlying logic of the experiment that follows: babies are born with some core assumptions, those assumptions steer their development in the right direction as they grow, and their knowledge becomes more and more refined as time passes and experience accumulates.
This gave the Piloto team inspiration.
Piloto wondered: would a deep learning AI model that imitates infant behavior patterns perform better than one that starts as a blank slate and relies solely on learning from experience?
The researchers further compared the two different models.
The first was the traditional approach (the "blank slate"): they gave the AI model visual animations of objects to learn from, such as a block sliding down a slope or a ball bouncing off a wall.
The AI model detected movement patterns in these animations, and the researchers then set out to test whether the model could predict the movement of other objects.
The baby-inspired AI model, on the other hand, starts out with some "principles", and the source of these "principles" is infants' innate assumptions about how objects move and interact with each other.
For a simple example, babies know that two objects cannot pass through each other, that an object cannot appear out of thin air, and so on.
In fact, the physical common sense that infants know innately goes beyond those two points. The full list comprises the following five concepts:
1. Continuity: Objects do not jump from one place to another; they follow a connected path through space and time.
2. Object persistence: Objects do not cease to exist when they are out of sight.
3. Solidity: Objects do not pass through each other.
4. Immutability: An object's properties (such as its shape) do not change.
5. Directional inertia: Objects move in a way consistent with the principle of inertia.
Armed with these five pieces of knowledge, if you perform a magic trick for a baby and something violates their built-in expectations, they will sense that something is off. They are clever enough to know that counterintuitive phenomena are not how things normally behave.
That said, babies are still not as knowledgeable as older children: they will stare at a counterintuitive phenomenon for a long time, compare it against their built-in expectations, and only then conclude that someone is playing a trick.
This reminds me of a very popular video: the parents hide behind a bed sheet, shake it up and down a few times, and then, still concealed by the sheet, quickly slip into the room behind them. When the sheet drops and the parents are nowhere to be seen, the baby stands there thinking for a while, wondering where they have gone.
There is another interesting point here: babies show "surprise" after seeing counterintuitive phenomena. This sounds obvious, but the researchers also replicated this distinctive behavior in AI.
With these foundations in mind, let’s look at the experimental results.
The AI model designed by Piloto's team is called PLATO (Physics Learning through Auto-encoding and Tracking Objects), an acronym that spells out the philosopher's name.
PLATO was trained on nearly 30 hours of videos showing objects performing simple movements, and the model was then trained to predict how those objects would move in different situations.
Interestingly, the model ended up learning the five physical common-sense concepts listed above.
When something counterintuitive appears in the video it is watching, PLATO also shows a degree of surprise, just like a baby.
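How can a model "show surprise"? Following the violation-of-expectation idea the study builds on, surprise can be read off as prediction error: the model keeps predicting what should happen next, and a physically impossible video produces a larger mismatch than its possible counterpart. Below is a minimal, illustrative Python sketch of that measurement; the array shapes, the squared-error metric, and the toy data are my own assumptions, not DeepMind's implementation.

```python
# Illustrative sketch only (not DeepMind's code): quantify "surprise" as the
# accumulated prediction error of a dynamics model over a probe video.
import numpy as np

def surprise_score(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Accumulated squared error between predicted and observed object states.

    Both arrays are assumed to have shape (num_frames, num_objects, state_dim).
    """
    return float(np.sum((predicted - observed) ** 2))

# Toy usage: score the same predictions against a physically possible probe video
# and an "impossible" twin in which an object suddenly teleports mid-video.
rng = np.random.default_rng(0)
observed_possible = rng.normal(size=(15, 3, 4))
observed_impossible = observed_possible.copy()
observed_impossible[10:] += 2.0  # violate continuity: object states jump abruptly
predicted = observed_possible + rng.normal(scale=0.1, size=observed_possible.shape)

print(surprise_score(predicted, observed_possible))    # small error
print(surprise_score(predicted, observed_impossible))  # larger error, i.e. "surprise"
```

A model with good physical priors should consistently register a larger surprise score on the impossible video of each probe pair than on its physically possible counterpart.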
Piloto and his colleagues found that the AI model trained the traditional way (the blank slate) performed reasonably well, but PLATO, the AI model that imitates babies, performed much better.
Thanks to its built-in priors, the latter model predicted object motion more accurately, could apply those priors to animations of new objects, and needed a smaller training dataset.
The Piloto team concluded that although learning and the accumulation of experience are important, they are not everything.
Their research points directly at a classic question: what in humans is innate, and what is learned?
The next step is to apply this kind of human cognition to AI research.
Piloto has shown us the excellent results of the new method.
However, Piloto emphasized that PLATO was not designed as a model of infant behavior; the team merely borrowed some insights from infant cognition to feed into the AI.
PLATO's system: a feedforward perception module (left) and a recurrent dynamics predictor module (right)
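To make the division of labor between those two modules concrete, here is a hand-wavy structural sketch in PyTorch. The layer sizes, the choice of an LSTM, and the per-object interface are illustrative assumptions rather than the paper's exact architecture: the perception module encodes each segmented object into a compact code, and the dynamics predictor rolls a history of such codes forward to predict the next one.

```python
# Structural sketch (assumptions throughout, not DeepMind's released code) of a
# feedforward perception module plus a recurrent dynamics predictor.
import torch
import torch.nn as nn

class PerceptionModule(nn.Module):
    """Feedforward encoder: a segmented object image -> a compact object code."""
    def __init__(self, in_channels: int = 3, code_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, code_dim),
        )

    def forward(self, object_images: torch.Tensor) -> torch.Tensor:
        # object_images: (batch, channels, height, width) -> (batch, code_dim)
        return self.encoder(object_images)

class DynamicsPredictor(nn.Module):
    """Recurrent predictor: a history of object codes -> the next object code."""
    def __init__(self, code_dim: int = 16, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(code_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, code_dim)

    def forward(self, code_history: torch.Tensor) -> torch.Tensor:
        # code_history: (batch, time, code_dim) -> predicted next code (batch, code_dim)
        hidden_states, _ = self.rnn(code_history)
        return self.readout(hidden_states[:, -1])

# Toy forward pass: encode five frames of one object, then predict the sixth code.
perception, dynamics = PerceptionModule(), DynamicsPredictor()
frames = torch.randn(5, 3, 64, 64)       # five segmented object crops
codes = perception(frames).unsqueeze(0)  # (1, 5, code_dim)
print(dynamics(codes).shape)             # torch.Size([1, 16])
```

The gap between the predicted next code and the code actually observed in the following frame is exactly the kind of prediction error used as the "surprise" signal above.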
Jeff Clune, a computer scientist at the University of British Columbia in Vancouver, also said that combining AI with the way human infants learn is an important direction.
Clune is currently working with other researchers to develop their own algorithms that build an understanding of the physical world.
Luis Piloto is the first author of the paper and the corresponding author.
He received a bachelor's degree in computer science from Rutgers University in 2012, then went to Princeton University, where he earned a master's degree in neuroscience in 2017 and a Ph.D. in 2021.
In 2016, he officially joined DeepMind and became a research scientist.