Google uses a large model to train a robot dog to understand vague instructions and is excited to go on a picnic

Simple and effective interaction between humans and quadruped robots is the path toward capable intelligent assistant robots, pointing to a future where technology improves our lives in ways beyond our imagination. The key to such human-robot interaction systems is giving the quadruped robot the ability to respond to natural language commands.

Large language models (LLMs) have developed rapidly in recent years and have shown potential for high-level planning. However, it remains difficult for LLMs to handle low-level instructions, such as joint angle targets or motor torques, especially for legged robots, which are inherently unstable and require high-frequency control signals. Therefore, most existing work assumes that the LLM is provided with a high-level API that determines the robot's behavior, which fundamentally limits the expressiveness of the system.

In the CoRL 2023 paper "SayTap: Language to Quadrupedal Locomotion", Google DeepMind and the University of Tokyo propose a new method that uses foot contact patterns as a bridge between human natural language instructions and a motion controller that outputs low-level commands.


  • Paper address: https://arxiv.org/abs/2306.07580
  • Project website: https://saytap.github.io/

The foot contact pattern refers to the order and manner in which a quadrupedal agent places its feet on the ground as it moves. Based on this, the researchers developed an interactive quadruped robot system that allows users to flexibly elicit different movement behaviors. For example, users can use simple language to command the robot to walk, run, jump, or perform other actions.

Their contributions include an LLM prompt design, a reward function, and a method that enables the SayTap controller to use feasible contact pattern distributions.

Research shows that the SayTap controller can implement multiple motion modes, and these capabilities can also be transferred to real robot hardware.

SayTap Method

The SayTap method uses a contact pattern template, which is a 4 × T matrix of 0s and 1s, where 0 represents a foot in the air and 1 a foot on the ground. From top to bottom, each row of the matrix gives the foot contact pattern of the front left (FL), front right (FR), rear left (RL), and rear right (RR) foot, respectively. SayTap's control frequency is 50 Hz, which means each 0 or 1 lasts 0.02 seconds. This study defines the desired foot contact pattern as a cyclic sliding window of size L_w and shape 4 × L_w. The sliding window extracts from the contact pattern template the quadruped grounding flags, which indicate whether each robot foot is on the ground or in the air between times t + 1 and t + L_w. The figure below gives an overview of the SayTap method.
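To make the sliding-window mechanics concrete, here is a minimal Python sketch (ours, not the paper's code) of a trot-like contact pattern template and the cyclic window extraction described above. The template values and the choices of T and L_w are illustrative assumptions.

```python
import numpy as np

T = 20    # pattern length: one gait cycle (0.4 s at 50 Hz); illustrative
L_w = 8   # sliding-window size; illustrative

# Trot-like template: diagonal foot pairs (FL+RR, FR+RL) alternate.
# Rows are FL, FR, RL, RR; 1 = foot on the ground, 0 = foot in the air.
half = T // 2
fl = np.array([1] * half + [0] * (T - half))
template = np.stack([fl, 1 - fl, 1 - fl, fl])   # shape (4, T)

def desired_contact_pattern(template: np.ndarray, t: int, L_w: int) -> np.ndarray:
    """Extract the 4 x L_w grounding flags for times t+1 .. t+L_w,
    wrapping around the template cyclically."""
    idx = np.arange(t + 1, t + 1 + L_w) % template.shape[1]
    return template[:, idx]

print(desired_contact_pattern(template, t=0, L_w=L_w))
```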


SayTap method overview

SayTap introduces the desired foot contact patterns as a new interface between natural language user commands and the motion controller. The motion controller is used to accomplish the main task (such as following a specified velocity) and to place the robot's feet on the ground at the specified times, so that the realized foot contact pattern is as close as possible to the desired one.

To do this, at each time step the motion controller takes as input the desired foot contact pattern, plus proprioceptive data (such as joint positions and velocities) and task-related inputs (such as user-specified velocity commands). DeepMind used reinforcement learning to train the motion controller, representing it as a deep neural network. During controller training, the researchers used a random generator to sample desired foot contact patterns and then optimized the policy to output low-level robot actions that achieve them. At test time, the LLM translates user commands into foot contact patterns.
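As a rough illustration of how closeness to the desired pattern can be rewarded during training, here is a hedged sketch of a contact-matching reward term. This is our own minimal version; the paper's actual reward function and weighting may be shaped quite differently.

```python
import numpy as np

def contact_matching_reward(desired: np.ndarray, realized: np.ndarray) -> float:
    """Fraction of the four feet whose realized ground-contact flag
    matches the desired flag at the current time step.
    `desired` and `realized` are length-4 arrays of 0/1."""
    return float(np.mean(desired == realized))

# In a full training setup, this term would be combined with the usual
# task rewards (velocity tracking, energy penalties, etc.).
print(contact_matching_reward(np.array([1, 0, 0, 1]), np.array([1, 0, 1, 1])))  # 0.75
```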


SayTap uses foot contact patterns as a bridge between natural language user commands and low-level control commands. SayTap supports both simple, direct instructions (such as "Slowly jog forward") and vague user commands (such as "Good news, we are going to have a picnic this weekend!"). Through the reinforcement-learning-based motion controller, the quadruped robot reacts according to the commands.

Research shows that with a properly designed prompt, the LLM can accurately map user commands to foot contact pattern templates in the specified format, even when the commands are unstructured or ambiguous. During training, the researchers used a random pattern generator to produce contact pattern templates with various pattern lengths T and various foot-ground contact ratios within a cycle based on a given gait type G, enabling the motion controller to learn from a wide distribution of movement patterns and gain better generalization. Please refer to the paper for more details.
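The following sketch, ours rather than the paper's generator, shows one simple way such a random pattern generator could sample templates with varying cycle length and foot-ground contact ratio; the real generator conditions on the gait type G, which is omitted here for brevity.

```python
import numpy as np

def random_template(rng: np.random.Generator) -> np.ndarray:
    """Sample a 4 x T contact pattern template with a random cycle
    length and a random ground-contact ratio, phase-shifted per foot.
    Purely illustrative of random pattern generation."""
    T = int(rng.integers(10, 40))          # cycle length in control steps
    ratio = rng.uniform(0.4, 0.8)          # fraction of the cycle on the ground
    on = int(round(ratio * T))
    base = np.array([1] * on + [0] * (T - on))
    offsets = rng.integers(0, T, size=4)   # random phase offset per foot
    return np.stack([np.roll(base, o) for o in offsets])

rng = np.random.default_rng(0)
print(random_template(rng).shape)  # e.g. (4, 22)
```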

Experimental results

Using a simple prompt containing only three in-context examples of common foot contact patterns, the LLM can accurately translate various human commands into contact patterns, and even generalize to cases where the command does not explicitly specify how the robot should behave.

The SayTap prompt is concise and compact, containing four components (see the sketch after this list):

(1) A general description of the task the LLM should complete;
(2) A gait definition that reminds the LLM of basic knowledge about quadrupedal gaits and their association with emotions;
(3) An output format definition;
(4) Demonstration examples that let the LLM learn in context.
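As a concrete illustration, below is a hypothetical sketch of how a prompt with these four components might be laid out. The wording, gait descriptions, and demonstration pattern are ours for illustration only; they are not the exact prompt from the paper.

```python
# Hypothetical prompt skeleton; not the paper's actual prompt text.
SAYTAP_STYLE_PROMPT = """\
[General description]
You translate user commands into quadruped foot contact patterns.

[Gait definition]
A trot alternates diagonal foot pairs; a bound is a jumpy gait often
associated with excitement or happiness; standing keeps all feet down.

[Output format]
Output four lines labeled FL, FR, RL, RR, each an equal-length string
of 0s and 1s (1 = foot on the ground).

[Demonstration examples]
Command: "Trot forward slowly."
FL: 11110000
FR: 00001111
RL: 00001111
RR: 11110000
"""
```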

The researchers also set five speeds so that the robot can move forward or backward, fast or slow, or stay still.

Follow simple and direct commands

The animation below shows examples of SayTap successfully executing direct and clear commands. Although some commands are not covered by the three in-context examples, the LLM can still be guided to express the internal knowledge it acquired during pretraining. This makes use of the gait definition component of the prompt, the second of the four components described above.


Follow unstructured or vague commands

Even more interesting is SayTap's ability to handle unstructured and vague instructions. Only a few hints are needed to link certain gaits to general impressions of emotions, such as the robot jumping up and down after hearing something exciting (like "Let's go on a picnic!"). It can also represent scenes accurately; for example, when told that the ground is very hot, the robot moves quickly to minimize the time its feet touch the ground.



Summary and future work

SayTap is an interactive system for quadruped robots that allows users to flexibly elicit different movement behaviors. SayTap introduces desired foot contact patterns as an interface between natural language and the low-level controller. The new interface is both straightforward and flexible, and it allows the robot to follow both direct instructions and commands that do not explicitly state how it should behave.

Researchers at DeepMind say a major direction for future research is to test whether commands that imply specific feelings can enable the LLM to output the desired gait. In the gait definition component shown in the results above, the researchers provided a sentence linking happy emotions to jumping gaits. Providing more information might enhance the LLM's ability to interpret commands, such as decoding implied feelings. In the experimental evaluations, the link between happy emotions and a bouncy gait allowed the robot to behave energetically while following vague human instructions. Another interesting future direction is the introduction of multimodal inputs, such as video and audio. In theory, foot contact patterns translated from those signals would also fit the workflow proposed here and are expected to open up more interesting use cases.

Original link: https://blog.research.google/2023/08/saytap-language-to-quadrupedal.html
