Google has released PaLM-E, the largest general-purpose model in history: with 562 billion parameters, it is billed as the Terminator's most powerful brain and can interact with robots through images.
The rapid "mutation" of large language models is pushing human society in an increasingly science-fictional direction. Now that this branch of the technology tree has been lit up, the world of "Terminator" seems to be drawing ever closer.
A few days ago, Microsoft just announced an experimental framework that can use ChatGPT to control robots and drones.
Of course Google is not far behind. On Monday, a team from Google and the Technical University of Berlin launched the largest visual language model in history - PaLM-E.
Paper address: https://arxiv.org/abs/2303.03378
As a multimodal embodied visual language model (VLM), PaLM-E can not only understand images but also understand and generate language, and it can even combine the two to process complex robot instructions.
In addition, by combining the 540-billion-parameter PaLM language model with the 22-billion-parameter ViT vision Transformer, PaLM-E ends up with 562 billion parameters in total.
A "generalist" model spanning the fields of robotics and vision-languagePaLM-E, The full name is Pathways Language Model with Embodied, which is an embodied visual language model.
Its power lies in its ability to use visual data to enhance its language processing capabilities.
What happens when we train the largest visual language model and combine it with a robot? The result is PaLM-E: a 562-billion-parameter, general-purpose, embodied visual language generalist spanning robotics, vision, and language.
According to the paper, PaLM-E is a decoder-only LLM capable of generating text completions in an autoregressive manner given a prefix or prompt.
Its training data consists of multimodal sentences that interleave encodings of visual input, continuous state estimates, and text.
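The sketch below illustrates the general idea of such a multimodal sentence: embeddings produced by a vision encoder are spliced into the text-token embedding sequence, and the combined prefix is handed to a decoder-only language model. This is a minimal illustration, not the official PaLM-E code; all module names and dimensions are invented for the example.

```python
# Minimal sketch (not the official PaLM-E code) of assembling a "multimodal
# sentence": image patches are encoded into embedding vectors that are spliced
# into the text-token embedding sequence before the decoder-only LM runs.
import torch
import torch.nn as nn

EMB_DIM = 512  # hypothetical shared embedding width

class TinyVisionEncoder(nn.Module):
    """Stand-in for a ViT: maps an image to a handful of embedding vectors."""
    def __init__(self, num_tokens=4):
        super().__init__()
        self.num_tokens = num_tokens
        self.proj = nn.Linear(3 * 32 * 32, num_tokens * EMB_DIM)

    def forward(self, image):  # image: (batch, 3, 32, 32)
        flat = image.flatten(1)
        return self.proj(flat).view(image.size(0), self.num_tokens, EMB_DIM)

def build_multimodal_prefix(text_ids, image, text_embedding, vision_encoder):
    """Interleave [text before image] + [image tokens] + [text after image]."""
    txt = text_embedding(text_ids)   # (batch, T, EMB_DIM)
    img = vision_encoder(image)      # (batch, K, EMB_DIM)
    mid = txt.size(1) // 2           # here we simply splice into the middle
    return torch.cat([txt[:, :mid], img, txt[:, mid:]], dim=1)

# Usage: the resulting prefix would be consumed by a decoder-only LM that
# autoregressively generates the text completion (e.g. a plan step).
text_embedding = nn.Embedding(1000, EMB_DIM)
vision_encoder = TinyVisionEncoder()
prefix = build_multimodal_prefix(
    torch.randint(0, 1000, (1, 8)), torch.rand(1, 3, 32, 32),
    text_embedding, vision_encoder)
print(prefix.shape)  # torch.Size([1, 12, 512])
```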
Once trained on single-image prompts, PaLM-E can not only guide a robot through a variety of complex tasks, but also generate language that describes an image.
It can be said that PaLM-E demonstrates unprecedented flexibility and adaptability and represents a major leap forward, especially in the field of human-computer interaction.
More importantly, the researchers demonstrated that training on a diverse mixture of tasks spanning multiple robots and general vision-language data enables transfer from vision-language knowledge to embodied decision-making, allowing the robot to use data efficiently when planning tasks.
In addition, PaLM-E stands out for its strong positive transfer ability.
PaLM-E trained across different domains, including Internet-scale general vision-language tasks, performs significantly better than robot models trained on single tasks.
The researchers also observed a clear advantage of model scale:
the larger the language model, the more of its language ability it retains while being trained on vision-language and robotics tasks.
At 562 billion parameters, PaLM-E retains almost all of its language capabilities.
Despite being trained only on single-image examples, PaLM-E shows outstanding capabilities in tasks such as multimodal chain-of-thought reasoning and multi-image reasoning.
On the OK-VQA benchmark, PaLM-E achieved a new SOTA.
In tests, the researchers showed how PaLM-E can be used for planning and long-horizon tasks on two different robot embodiments.
It is worth noting that all these results were obtained using the same model trained on the same data.
In the past, robots usually required human assistance to complete long-horizon tasks. Now, PaLM-E allows them to learn to do so on their own.
For example, an instruction such as "Take the potato chips out of the drawer" involves multiple planning steps as well as visual feedback from the robot's camera.
PaLM-E, which is trained end-to-end, can plan for the robot directly from pixels. Because the model is integrated into a control loop, the robot remains robust to disturbances along the way while retrieving the chips.
Human: Give me some potato chips.
Robot: 1. Go to the drawer 2. Open the top drawer 3. Pick up the green potato chips from the top drawer 4. Put them on the counter.
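The end-to-end control loop described above can be pictured roughly as follows. This is a minimal sketch under assumed interfaces (capture_image, execute_skill, and the model call are hypothetical placeholders, not PaLM-E's actual API): the model repeatedly looks at the latest camera frame, emits the next plan step as text, and a low-level controller executes it, which is what makes the behavior robust to disturbances.

```python
# Minimal sketch of a perception-plan-act loop (hypothetical interfaces,
# not the actual PaLM-E / robot API).

def run_task(instruction, model, robot, max_steps=10):
    """Replan after every executed step using fresh visual feedback."""
    history = []  # plan steps produced so far, fed back into the prompt
    for _ in range(max_steps):
        image = robot.capture_image()          # latest camera frame
        prompt = (f"Task: {instruction}\n"
                  f"Steps so far: {history}\n"
                  f"Next step:")
        step = model.generate(prompt, image)   # e.g. "Open the top drawer"
        if step.strip().lower() == "finished":
            break
        robot.execute_skill(step)              # low-level controller runs it
        history.append(step)
    return history

# Usage (with stand-in objects):
# plan = run_task("Give me some potato chips", palm_e, mobile_manipulator)
```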
Similarly, when the robot is asked to pick up an apple, given the image, the steps it performs are as follows:
Human: Bring me an apple.
Robot: 1. Go to the apple 2. Pick up the apple 3. Bring it to the operator 4. Put down the apple 5. Finished.
In addition to long-horizon tasks, PaLM-E also enables the robot to perform planning tasks such as arranging building blocks.
The researchers carried out multi-stage planning from visual and language input, combined with long-horizon visual feedback, which allowed the model to successfully plan the long-horizon task "Sort the blocks into different corners by color."
As shown below, when it comes to arranging and combining, the robot turns into a generalist and sorts the building blocks by color.
In terms of model generalization, the robot controlled by PaLM-E can move a red building block over to the coffee cup.
It is worth mentioning that the data set only contains three demonstrations with coffee cups, but none of them include red building blocks.
Similarly, although the model has never seen a turtle before, it can still successfully push the green blocks next to the turtle.
In terms of zero-shot reasoning, PaLM-E can tell a joke about a given image, and it demonstrates capabilities including perception, vision-grounded dialogue, and planning.
PaLM-E also understands relationships between multiple images, for example, where picture 1 (left) is located within picture 2 (right).
Additionally, PaLM-E can perform mathematical operations given an image with handwritten digits.
For example, for the following handwritten restaurant menu, PaLM-E can directly calculate how much 2 pizzas cost.
It can also handle general tasks such as visual question answering and captioning.
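To make the image-grounded math query above concrete, here is a minimal sketch of how such a question could be posed. All names are hypothetical, and the menu price is an assumed placeholder, since the article does not state it.

```python
# Hypothetical query wrapper: image + question in, answer text out.
# The model must read the handwritten price from the image and do the
# arithmetic itself; the expected answer below assumes a made-up price.

def ask(model, image, question):
    """Send one image-grounded question to a multimodal model (sketch)."""
    prompt = f"Q: {question}\nA:"
    return model.generate(prompt, image)

# menu_photo = load_image("menu.jpg")   # photo of the handwritten menu
# answer = ask(palm_e, menu_photo, "How much do 2 pizzas cost?")
# If the menu listed a pizza at $9.99 (assumed), the expected answer would be
# something like "2 x $9.99 = $19.98".
```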
Finally, the findings suggest that frozen language models are a viable path toward general-purpose embodied multimodal models that fully retain their language capabilities.
At the same time, the researchers also found an alternative route that unfreezes the model: increasing the size of the language model can significantly reduce catastrophic forgetting.