If you have something to say, please speak! Google robot can learn and think on its own after 'eating' a large language model

"You can go to the hall, you can go to the kitchen." This is a compliment to the ideal kind wife, and I will probably say it to Google's robots in the future.

Have you ever seen a robot that ships with a large language model and can teach itself? If it doesn't know how to do something today, that's fine: it can learn, and before long it will be able to do it.


Compared with Boston Dynamics' "iron-masked King Kong" robots, which climb mountains of knives, plunge into seas of fire, and cross ridges as if strolling on flat ground, the learning robot Google has developed this time is more like a considerate little assistant at your side. "Do exactly what I say" is the usual routine for robots executing instructions; Google's new research lets robots not only follow instructions but also work things out on their own.

This is the first time Google has combined a large language model with a robot, teaching the robot to understand and carry out tasks the way a human would.


Paper address: https://arxiv.org/pdf/2204.01691.pdf

The title of the Google paper says it all: "Do as I can, not as I say".

It roughly means: "You are a mature robot now. You can do what I do; if you don't know how, you can learn; if you're not yet skilled, you can practice!" Google named this robot PaLM-SayCan. In a Washington Post report, journalists watched researchers ask the robot to make a burger out of plastic toy ingredients. The robotic arm seemed to know that ketchup goes after the patty and before the lettuce, but at the moment this chef believes "add ketchup" means putting the entire ketchup bottle into the burger.

Although this robot chef is not yet qualified, Google believes that with training from a large language model, learning to make burgers is only a matter of time. The robot can also recognize cans of 7-Up and Coca-Cola, open drawers, and find a bag of potato chips. With PaLM's abstraction abilities, it can even understand that yellow, green, and blue bowls can stand in for the desert, jungle, and ocean, respectively.


How is this different from previous robots? There have been robots that made burgers, fried noodles, and pizza before, but they were really just executing a chain of explicit single-action commands, such as "move your right arm three spaces to the left" or "flip it over." Google's goal now is for robots to understand and execute commands like "Come make me a hamburger," "I'm hungry, go buy me a bun," and "Go out and play ball with me."

It’s like talking to someone.

For example, when a Google AI researcher said to the PaLM-SayCan robot, "My drink spilled, can you help?" the robot glided on its wheels through the kitchen of a Google office building, used computer vision to spot a sponge on the counter, grabbed it with its motorized arm, and brought it back.


"This is fundamentally a different model," said Google's Brian Ichter. He is one of the authors of a recently released paper describing new advances in such robots.

Robots are no longer a rarity: millions of them work in factories around the world, but they follow specific instructions and typically focus on just one or two tasks. Building a robot that can complete a whole series of tasks and learn on the job is far more complicated. For years, technology companies large and small have been trying to build such "general-purpose robots."

Large language models, which have surged in popularity in recent years, gave Google the inspiration it needed for developing "general-purpose robots." These models are trained on vast amounts of text from the Internet, teaching AI software to predict the kinds of responses that might follow a given question or comment.


From BERT to GPT-3 and later MT-NLG, as parameter counts grew rapidly, these models became so good at predicting appropriate responses that interacting with one often feels like conversing with a knowledgeable human. With so much knowledge, wouldn't it be a pity to spend all day just chatting? If you can talk, you can work. From chatbot to assistant robot, Google's research direction followed naturally.

What’s so great about this PaLM-SayCan?

This time, Google AI, working with the Everyday Robots project from X, the moonshot division of Google's parent company Alphabet, proposed a method that extracts knowledge from a pretrained large language model (LLM), allowing the robot to follow high-level text instructions to complete physical tasks.


The Everyday Robots project has been in the works for many years; many of the team members working with Google AI joined Alphabet in 2015 or 2016. The idea is for robots to use cameras and sophisticated machine-learning algorithms to see and learn from the world around them, without having to be taught every potential situation they might encounter.


Google's idea is this: large language models encode rich semantic knowledge about the world, and that knowledge is very useful for robots designed to carry out tasks described in natural language. The obvious shortcoming of an LLM is its lack of real-world grounding: a plan that looks perfect on paper may be useless in real life.

Therefore, the researchers propose "grounding the model in the real world through pretrained skills," constraining it to propose only natural-language actions that are feasible in the current environment.

The robot serves as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task.

Google trained PaLM (Pathways Language Model) on a huge machine with 6,144 chips. The training corpus included a large collection of multilingual web documents, books, Wikipedia articles, conversations, and programming code from GitHub. The AI agent trained this way can explain jokes, complete sentences, answer questions, and reason via its own chain of thought.

The next question: if this agent is placed inside a robot, how do you extract and apply the large language model's knowledge to physical tasks? For example, told "my drink spilled," GPT-3 might say you could use a vacuum cleaner, and LaMDA might ask whether you'd like it to find a cleaner for you; neither response helps a robot act.


A bare large language model cannot drive this kind of operation because it never interacts with the real environment. SayCan layers a value judgment, formed from pretrained skills, on top of the LLM, allowing it to handle instructions in complex, real environments.


Inspired by this example, the researchers studied the problem of extracting knowledge from an LLM so a robot can follow high-level textual instructions. The robot is equipped with a repertoire of learned skills for "atomic" behaviors, each capable of low-level visuomotor control. Beyond asking the LLM simply to interpret an instruction, it can also be used to score how likely each individual skill is to make progress toward completing the high-level instruction.

Assuming each skill has an affordance function that quantifies its probability of succeeding from the current state (for example, a learned value function), this value measures whether the skill is actually feasible. Combined with the LLM's score, it describes the probability that each skill contributes to completing the instruction.
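The scoring scheme described above can be sketched in a few lines: the LLM's usefulness score for each skill is multiplied by the skill's affordance value, and the robot executes the highest-scoring skill. The skill names and numbers below are made up for illustration; a real system would query an LLM and learned value functions.

```python
# Minimal sketch of SayCan-style skill selection (hypothetical scores).

def select_skill(skills, llm_score, affordance):
    """Pick the skill maximizing p(useful for instruction) * p(feasible now)."""
    combined = {s: llm_score[s] * affordance[s] for s in skills}
    return max(combined, key=combined.get)

# Instruction: "My drink spilled, can you help?"
skills = ["find a sponge", "use a vacuum cleaner", "pick up the apple"]
llm_score = {"find a sponge": 0.6,        # LLM: relevant to the instruction
             "use a vacuum cleaner": 0.3,
             "pick up the apple": 0.1}
affordance = {"find a sponge": 0.9,       # sponge visible on the counter
              "use a vacuum cleaner": 0.0,  # no vacuum in the environment
              "pick up the apple": 0.8}
print(select_skill(skills, llm_score, affordance))  # → find a sponge
```

Note how the affordance term vetoes the vacuum cleaner, which the LLM alone might have suggested: a skill that is impossible in the current environment scores zero no matter how plausible it sounds.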


The researchers used two metrics to evaluate the system's performance:

(1) Planning success rate, indicating whether the robot has selected the correct skills for the instruction;

(2) Execution success rate, indicating whether it successfully executed the instruction.

The data shows that PaLM-SayCan achieves the highest instruction execution rate of all the models tested.
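As a minimal sketch (with made-up trial records), the two metrics above could be computed from a log of trials like this:

```python
# Compute planning and execution success rates from trial logs
# (the trial records here are hypothetical).

def success_rates(trials):
    """Each trial records whether the planned skill sequence was correct
    and whether the robot physically executed it successfully."""
    n = len(trials)
    plan_rate = sum(t["correct_plan"] for t in trials) / n
    exec_rate = sum(t["executed"] for t in trials) / n
    return plan_rate, exec_rate

trials = [
    {"correct_plan": True,  "executed": True},
    {"correct_plan": True,  "executed": False},  # planned right, fumbled the grasp
    {"correct_plan": False, "executed": False},
    {"correct_plan": True,  "executed": True},
]
plan_rate, exec_rate = success_rates(trials)
print(plan_rate, exec_rate)  # → 0.75 0.5
```

Execution success is bounded by planning success: a correctly executed instruction presupposes a correct plan, which is why the two metrics are reported separately.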

Risks: what if the robot turns bad?

The idea is great, but the work is not without risks. Large language models are trained on text from the Internet, and some have shown racist or sexist tendencies, and can sometimes be induced to produce hate speech or lies. Using such a model to train a chatbot yields a voice assistant that can curse and gossip; but what if it is used to train a robot with hands and feet that can actually do harm?

More dangerous still: if a robot trained this way became conscious, things could spin out of control (there are plenty of science-fiction movies on the theme).

In July this year, a Google employee claimed that the company's chatbot software was sentient. The consensus among AI experts is that these models are not alive, but many worry they will exhibit bias because they are trained on large amounts of unfiltered, human-generated text.

Despite this, Google keeps pushing forward. Researchers no longer need to code specific technical instructions for each of a robot's tasks; they can simply talk to it in everyday language. What's more, the new software helps robots parse complex multi-step instructions on their own.

Now, robots can interpret instructions they have never heard before and come up with meaningful responses and actions on their own.

For robots, a new door may have just opened, but the road ahead is still long. Artificial intelligence techniques such as neural networks and reinforcement learning have been used to train robots for years; there have been breakthroughs, but progress remains slow.

Google's robot is far from ready for real-world use; the researchers have repeatedly said it is still at the laboratory stage and there are no plans to commercialize it.
