
An American professor used his 2-year-old daughter to train an AI model, and the work made it into Science! Human cubs train new AI with head-mounted cameras

WBOY | Original | 2024-06-03

That's right: in order to train an AI model, a professor at New York University strapped a GoPro-like camera to his daughter's head!

Although it sounds incredible, the professor's approach is actually well-founded.


Training the complex neural networks behind LLMs requires a massive amount of data.

But is our current LLM training process really the simplest, most efficient way?

Certainly not! Scientists have found that human toddlers' brains absorb information like a sponge soaks up water, quickly forming a coherent worldview.


Although LLMs show amazing performance at times, over time human children grow smarter, and more creative, than the models!

The secret of how children master language

How can LLMs be trained in a better way?

Just as scientists were puzzling over this, human cubs made their eyes light up:

the way children learn language makes them masters of language acquisition.


We all know the story: drop a young child into a country with a completely different language and culture, and within a few months his mastery of the local language may approach native level.

Large language models pale in comparison.

First of all, they are far too data-hungry!

Nowadays, the major companies training models have nearly exhausted all the data in the world, because LLM learning requires astronomical amounts of text mined from the internet and elsewhere.

For them to master a language, you need to feed them trillions of words.


Brenden Lake and the NYU scholars who participated in this study

Secondly, even after cramming in that much data, LLMs may not learn accurately.

The output of many LLMs amounts to predicting the next word with a certain accuracy, and that kind of accuracy is increasingly unsettling.

In stark contrast, children do not need so much experience to learn to speak a language fluently.

Brenden Lake, a psychologist at New York University who studies humans and AI, is focusing on exactly this.

He decided to run an experiment with his daughter Luna, then 1 year and 9 months old.


For the past 11 months, Lake has had his daughter wear a camera for an hour every week, recording video from her perspective while she plays.

With the footage captured by Luna's camera, Lake hopes to train models using the same data that children are exposed to.


Strapping a GoPro to his toddler daughter

Although linguists and child-development experts do not currently agree on how children acquire language, Lake is convinced that the secret to making LLMs more efficient lies in children's learning patterns!

So Lake launched a research project: studying the stimuli children experience when learning their first words, in order to improve the efficiency of LLM training.

To do this, Lake's team needed to collect video and audio data from 25 children across the United States.

This is the scene from the beginning of the article: GoPro-like cameras strapped to the heads of these children, including Lake's daughter Luna.


Lake explained that their model attempts to connect video clips from the child's perspective with what the child's caregiver is saying, much as OpenAI's CLIP model connects captions to images.

Given an image as input, CLIP can suggest a descriptive caption for it, drawing on its training data of image-caption pairs.


Paper address: https://openai.com/index/clip/
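For intuition, here is a minimal sketch of CLIP-style matching in Python: embed an image and several candidate captions, then pick the caption whose embedding lies closest to the image's. The tiny linear encoders and random features below are stand-ins for illustration, not OpenAI's actual models.

```python
# Minimal CLIP-style matching sketch: score candidate captions against an
# image by cosine similarity in a shared embedding space.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
image_encoder = torch.nn.Linear(2048, 512)  # stand-in for a real vision encoder
text_encoder = torch.nn.Linear(300, 512)    # stand-in for a real text encoder

image_features = torch.randn(1, 2048)       # pretend pre-extracted image features
caption_features = torch.randn(3, 300)      # pretend features for 3 candidate captions
captions = ["a ball on the grass", "a cup of milk", "a toy car"]

img_emb = F.normalize(image_encoder(image_features), dim=-1)
txt_emb = F.normalize(text_encoder(caption_features), dim=-1)
scores = (img_emb @ txt_emb.t()).squeeze(0)  # cosine similarity per caption
print("best caption:", captions[scores.argmax().item()])
```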

In addition, the Lake team's model can take an image of a scene as input and output language describing that scene, based on its training data of GoPro footage and caregiver audio.

Conversely, the model can also take a description and retrieve frames it previously saw in training.

At first glance, doesn't that sound simple? The model learns to match spoken words to the objects observed in video frames, just as human children do.

In actual implementation, though, many complications arise.

For example, children do not always look at the object or action being described.

There are even more tenuous cases, such as when we give a child milk but the milk is in an opaque cup, leaving only a very loose connection between word and sight.

Thus, Lake explained, the experiment was not meant to prove whether a model can be trained to match objects in images with their corresponding words (OpenAI has already demonstrated this).


Instead, what the team wanted to see was whether a model could actually learn to recognize objects using only the data available to a child, data which is incredibly sparse.

As you can see, this is completely opposite to the way big companies such as OpenAI, Google, and Meta build models.

Bear in mind that Meta used 15 trillion tokens to train Llama 3.

If the Lake team's experiment succeeds, perhaps the LLM data shortage faced by the whole world will be solved, because training an LLM would no longer require nearly as much data!


In other words, the new idea is to let AI models learn from limited input and then generalize beyond the data they have seen.

I think our focus should not be limited to training ever-larger LLMs on ever more data. Yes, you can get amazing performance from an LLM that way, but it drifts further and further from the wonders of human intelligence as we know it...

Early experiments have been successful

Early experimental results suggest that the Lake team's idea may be right.

In February this year, they trained a neural network on 61 hours of video footage recording a young child's experience.

The study found that the model could connect the various words and phrases spoken by the subjects to the experiences captured in the video frames: presented with a word or phrase, the model could recall the relevant images. The paper was published in Science.


Paper address: https://www.science.org/doi/10.1126/science.adi1374

Lake said that the most surprising result is that the model can generalize object names to images it was never trained on!

Of course, the accuracy is not necessarily high. But the model was only ever meant as a proof of concept.

The project is not yet complete because the model has not learned everything a child would know.


After all, it is only about 60 hours of annotated speech, roughly one percent of the experience a child accumulates in two years. And the team needs more data to figure out what is learnable.

Lake also admits that the first model's method still has limitations:

It analyzes only the video clips that co-occur with the caregiver's speech, and those clips are reduced to still images sampled at 5 frames per second. From these alone, the AI does not really learn what verbs are or what abstract words mean; it only obtains static slices of what the world looks like.

Because it knows nothing about what happened before, what happened after, or the context of the conversation, it is hard for it to learn what "walking", "running" and "jumping" are.
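To make that 5-frames-per-second preprocessing concrete, here is a minimal sketch of how video might be reduced to still frames, assuming OpenCV; the function and its details are illustrative, not the team's actual pipeline.

```python
# Illustrative sketch: reduce a video to still images sampled at ~5 fps,
# the kind of static slices described above. Not the authors' actual code.
import cv2

def sample_frames(video_path: str, target_fps: float = 5.0) -> list:
    """Return frames sampled at roughly target_fps from a video file."""
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata missing
    step = max(1, round(native_fps / target_fps))   # keep every step-th frame
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```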

But in the future, as the technology behind modeling videos becomes more mature, Lake believes the team will build more effective models.

If we could build a model of how language acquisition actually begins, it would open up important applications for understanding human learning and development, and perhaps help us understand developmental disorders, or the conditions under which children learn language.

Eventually, such models could be used to test millions of different speech therapies.

Having said that, how do children solidly master a language through their own eyes and ears?


Let's take a closer look at the Lake team's paper in Science.

Connecting words with real objects and visual images

How do human children shed their ignorance of the world and acquire knowledge? This "black box" has long drawn the pursuit of educators, and it also holds the question of where the intelligence locked inside each of us begins.

Korean science-fiction writer Kim Choyeop imagined in "The Symbiosis Hypothesis" that the wisdom human children display in their early years actually carries a lost alien civilization: the aliens chose to coexist with humans this way, but only for five short years. Once humans grow up and form truly solid memories, the magnificent memories of childhood are erased.

Netizens often share stories online about human cubs who "forgot to drink Meng Po's soup", the mythical brew that erases memories of a past life.

Enigmatic childhood is a place we find hard to explain and hard to return to; it is a kind of nostalgia. As the lyric on a golden blade of grass goes: "Don't leave. Don't take away that beautiful world. When I grow up, please stay with me."


How do young children associate new words with specific objects or visual concepts?

For example, when hearing the word "ball", how do children think of elastic round objects?


To this end, Lake's team put a head-mounted camera on a child and tracked his growth from 6 to 25 months of age, recording a 61-hour stream of visual and linguistic data.

On this dataset of the child's clips spanning a year and a half (600,000 video frames paired with 37,500 transcribed utterances), the researchers trained a model: the Child's View Contrastive Learning model, CVCL.


This model instantiates a form of cross-situational associative learning, identifying mappings between words and possible visual referents.


The model coordinates the contrastive objectives of two neural networks, a vision encoder and a language encoder, and is trained in a self-supervised manner (i.e., using only the child-view recordings, with no external labels). The contrastive objective pulls together the embeddings (vectors) of video frames and the utterances that temporally co-occur with them, and pushes apart those that do not.
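As a sketch of what such a contrastive objective can look like in code, here is a minimal CLIP-style InfoNCE loss in PyTorch, assuming a batch of embeddings for co-occurring (frame, utterance) pairs; the paper's exact loss and hyperparameters may differ.

```python
# Sketch of a symmetric contrastive (InfoNCE) loss: co-occurring frame and
# utterance embeddings are pulled together, mismatched pairs pushed apart.
import torch
import torch.nn.functional as F

def contrastive_loss(frame_emb: torch.Tensor, text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE over a batch of N co-occurring (frame, utterance) pairs."""
    frame_emb = F.normalize(frame_emb, dim=-1)       # unit-length embeddings
    text_emb = F.normalize(text_emb, dim=-1)
    logits = frame_emb @ text_emb.t() / temperature  # N x N similarity matrix
    targets = torch.arange(len(logits))              # diagonal holds true pairs
    # Symmetric cross-entropy: match frames to utterances and vice versa.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```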

Of course, this dataset, called SAYCam-S, is limited because it only captures about 1% of a child's waking time, missing a lot of their experience.

But despite this, CVCL can still learn powerful multi-modal representations from a child’s limited experience!

The team successfully demonstrated that the model acquires many of the referent mappings present in children's everyday experience, and is therefore able to generalize to new visual referents zero-shot and to align its visual and linguistic concept systems.

Evaluating the learned word-referent mappings

Specifically, after training was complete, the team evaluated the quality of the word-referent mappings learned by CVCL and by various alternative models.

The results show that the classification accuracy of CVCL is 61.6%.

Moreover, Figure 2D shows that for 11 of the 22 concepts, CVCL's performance is within 5% of CLIP's, even though CLIP was trained on several orders of magnitude more data (400 million image-text pairs from the web).


The results show that many of the earliest word-referent mappings can be acquired from as few as 10 to 100 naturally occurring word-referent pairs.
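The evaluation itself can be pictured as zero-shot classification by nearest word embedding, a reading consistent with the contrastive setup described above. The sketch below assumes precomputed frame and word embeddings as inputs; it is an illustration, not the authors' evaluation code.

```python
# Sketch of zero-shot evaluation: classify a frame by which concept's word
# embedding is most similar to the frame's embedding.
import torch
import torch.nn.functional as F

@torch.no_grad()
def classify_frame(frame_emb: torch.Tensor, word_embs: torch.Tensor,
                   concept_names: list) -> str:
    """Return the concept whose word embedding best matches the frame."""
    v = F.normalize(frame_emb, dim=-1)   # D-dim frame embedding
    w = F.normalize(word_embs, dim=-1)   # K x D word embeddings
    sims = w @ v                         # cosine similarity to each concept
    return concept_names[sims.argmax().item()]
```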

Generalizing to novel visual stimuli

In addition, the researchers evaluated whether the words learned by CVCL generalize to out-of-distribution visual stimuli.

Figure 3A shows that CVCL demonstrates some understanding of these visual concepts, with an overall accuracy of 34.7%.


Admittedly, this task involves a larger concept set and the added difficulty of out-of-distribution generalization, which explains the lower accuracy.


On the left are two randomly selected training examples; on the right are four test examples, each annotated with the model's recognition accuracy for that image. From left to right, the test cases shown are the two highest, the median, and the lowest in accuracy. The closer a test case is to the training cases in color and shape, the higher the model's recognition accuracy.

Multi-modal consistency is strong

Finally, the researchers tested the consistency of CVCL’s visual and language concept systems.

For example, if both the visual embedding and the word embedding of "car" are more similar to those of "road" than to those of "ball", it indicates that the multi-modal alignment works well.

The following figure shows the high degree of alignment of CVCL's visual and language systems.


The relationship between images and text: each dotted line represents the distance between a concept's visual centroid and its word embedding.
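That dotted-line distance can be computed along these lines; a small sketch assuming each concept's frame embeddings and word embedding are given as tensors, not the paper's exact measure.

```python
# Sketch of the alignment measure: cosine distance between a concept's visual
# centroid (mean of its frame embeddings) and its word embedding.
import torch
import torch.nn.functional as F

def centroid_to_word_distance(frame_embs: torch.Tensor,
                              word_emb: torch.Tensor) -> float:
    """Smaller values mean the visual and language systems agree on the concept."""
    centroid = F.normalize(frame_embs.mean(dim=0), dim=-1)  # visual centroid
    word = F.normalize(word_emb, dim=-1)
    return 1.0 - torch.dot(centroid, word).item()  # cosine distance in [0, 2]
```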


Different visual concepts vary in how tightly their examples cluster. Because a baby's gaze wanders between objects that are physically close together, the model does not form clear referent mappings when distinguishing "hand" from "toy", whereas "car" and "crib" fare better.

In each figure, CVCL's predictions are compared with labeled examples, visualized using t-SNE.


The blue points on the left correspond to 100 frames belonging to a given category; the green points on the right correspond to the 100 most highly activated frames (by cosine similarity to each concept's word embedding in CVCL). Below each plot are example frames from one or more sub-clusters within the concept, capturing how word embeddings interact with image embeddings in the joint embedding space. For the word "stairs", for example, one cluster represents images of indoor wooden stairs, while another main cluster represents images of a set of blue outdoor stairs. All the t-SNE plots in these figures are derived from the same set of joint image and text embeddings.
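A joint t-SNE map of this kind can be produced roughly as follows; the sketch assumes NumPy arrays of image and word embeddings and uses scikit-learn's TSNE, so it is an illustration rather than the authors' plotting code.

```python
# Sketch: project image and word embeddings into a shared 2-D t-SNE map,
# scatter the frames, and annotate each word embedding with its label.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_joint_tsne(image_embs: np.ndarray, word_embs: np.ndarray,
                    word_labels: list) -> None:
    joint = np.vstack([image_embs, word_embs])  # embed both modalities together
    coords = TSNE(n_components=2, perplexity=30).fit_transform(joint)
    img_xy, word_xy = coords[:len(image_embs)], coords[len(image_embs):]
    plt.scatter(img_xy[:, 0], img_xy[:, 1], s=8, alpha=0.4, label="frames")
    for (x, y), label in zip(word_xy, word_labels):
        plt.annotate(label, (x, y))             # mark each word embedding
    plt.legend()
    plt.show()
```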

The figure below shows that the model can localize the target referent across different views.


In the normalized attention maps, yellow indicates the areas of highest attention. In the first two categories (ball and car), the model can localize the target across different views. In the bottom two categories (cat and paper), however, the attention maps were sometimes misaligned with the referent, indicating that the ability to localize the referent was not consistent across categories.

Of course, there are still many differences between children's learning and machine-learning models.

But the Lake team's research has undoubtedly given us plenty of inspiration.

