What should I do if AI can't understand "he, she, it"? Verbs are the new breakthrough: when the robot hears "butter", it knows to pick up the knife and fork.
When giving instructions to an AI, do you ever feel it's nothing like talking to a person?
Sure, AI can follow concrete human instructions, such as:
Help move a chair from the restaurant.
But replace that with a vague instruction made of only pronouns (he/she/it/this/that/thing...) and verbs, and the AI gets confused:
Help find something that can spread things.
Now, researchers have come up with a new way to deal with this: why not teach the AI to understand verbs?
A verb is naturally bound to certain nouns. The action "spread butter", for example, is inseparable from nouns such as "knife" and "fork".
By matching the two, the AI can accurately find the target object even without noun instructions like "knife" or "fork":
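The verb-to-noun matching idea can be illustrated with a toy sketch (this is an illustration of the concept, not the paper's learned model; the affordance table and scores are invented):

```python
# Toy illustration (not TOIST itself): score the objects detected in a scene
# against the verb in an instruction, using a hand-made affordance table.
AFFORDANCES = {
    "spread": {"knife": 0.9, "spoon": 0.6, "fork": 0.2},
    "sit": {"chair": 0.9, "sofa": 0.8, "table": 0.1},
}

def best_object_for_verb(verb, scene_objects):
    """Return the detected object most compatible with the given verb."""
    scores = AFFORDANCES.get(verb, {})
    return max(scene_objects, key=lambda obj: scores.get(obj, 0.0))

# "Find something that can spread..." -> the knife, not the fork or cup.
print(best_object_for_verb("spread", ["fork", "knife", "cup"]))  # knife
```

TOIST learns this verb-object compatibility from data rather than from a fixed table, but the matching intuition is the same.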
The paper has been accepted at NeurIPS 2022, and the model has been open-sourced:
So how does it train AI to understand verbs?
The paper proposes a framework called TOIST.
TOIST stands for "Task Oriented Instance Segmentation Transformer", a new Transformer-based instance segmentation approach.
Unlike semantic segmentation, which labels the whole image, instance segmentation also has the character of object detection. In the figure below, for example, the noun "hatchback car" is used to locate the corresponding object directly:
Previous instance segmentation models usually worked in two steps: first detect candidate targets, then rank those candidates and predict the most likely one.
TOIST, by contrast, adopts a single end-to-end Transformer architecture, in which the self-attention mechanism in the decoder models preference relations among candidate targets.
The TOIST framework is divided into three parts.
A multi-modal encoder (brown) extracts feature tokens, a Transformer encoder (green) aggregates features from the two modalities, and a Transformer decoder (blue) uses its attention mechanism to predict the most appropriate target.
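The role of decoder self-attention can be sketched in a few lines of NumPy (a minimal illustration with invented shapes and random weights; TOIST's real decoder is a full multi-layer Transformer):

```python
import numpy as np

# Minimal sketch: self-attention lets candidate-object queries exchange
# information, so a later head can score them against each other.
def self_attention(x):
    """Scaled dot-product self-attention; x has shape (n_candidates, dim)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # pairwise compatibility
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # softmax over candidates
    return w @ x                                   # mix candidate features

rng = np.random.default_rng(0)
queries = rng.normal(size=(5, 16))  # 5 candidate queries, 16-dim features
attended = self_attention(queries)
print(attended.shape)  # (5, 16)
```

Because each query attends to every other candidate, the network can express "this knife is a better match than that fork" directly, instead of ranking candidates in a separate second stage.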
The paper then proposes a new training method called noun-pronoun distillation.
Specifically, within a knowledge-distillation framework (the teacher-student model in the figure above), the model is trained to "guess" noun prototypes from context in an unsupervised manner.
For example, if the original instance segmentation task is "dig a hole with a skateboard", the noun "skateboard" is replaced with the pronoun "something" when training the model:
This way, even without being told the noun, the AI can infer it from context and segment the correct target in the image:
How well does this segmentation work in practice?
The paper evaluates TOIST on the large-scale task-oriented dataset COCO-Tasks.
Evaluation uses mAP (mean Average Precision), a metric common in vision tasks such as object detection.
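As a refresher, average precision for a single class can be computed from a ranked list of detections (a back-of-the-envelope sketch; real COCO-style mAP additionally averages over IoU thresholds and categories):

```python
# AP sketch: average the precision at each rank where a true positive occurs.
def average_precision(ranked_hits):
    """ranked_hits: 1/0 flags for detections sorted by descending confidence."""
    total_pos = sum(ranked_hits)
    if total_pos == 0:
        return 0.0
    ap, tp = 0.0, 0
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            ap += tp / rank  # precision at this recall step
    return ap / total_pos

# Hits at ranks 1, 3, 4: AP = (1/1 + 2/3 + 3/4) / 3
print(round(average_precision([1, 0, 1, 1, 0]), 4))  # 0.8056
```

mAP then averages this quantity over all object categories.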
In short, TOIST outperforms the previous state-of-the-art instance segmentation and object detection models, and with noun-pronoun distillation added, the "enhanced" TOIST does better still.
On object detection, the enhanced TOIST improves bounding-box mAP by 10.9% over the current best YOLO+GGNN; on instance segmentation, its mask mAP is 6.6% higher than Mask-RCNN+GGNN.
The noun-pronoun distillation method itself, compared with the plain version of TOIST, improves accuracy by 2.8% and 3.8% respectively.
In qualitative cases, the model's output is also very close to the ground-truth segmentation.
In Figure (d), for example, the algorithm even recognizes that the beer bottle cap can be opened using the table, showing impressive understanding:
Regarding the motivation for this research, the authors responded:
Our lab works on robotics, and in everyday user studies we found that users sometimes prefer to describe their "needs" to a robot rather than telling it directly what to do.
In other words, AI algorithms are used to make the robot "think one more step" instead of just being an assistant that follows orders.
The authors of this paper come from Tsinghua University Intelligent Industry Research Institute (AIR), Peking University and Intel Research Institute. Zhang Yaqin, dean of AIR, is also one of the authors.
Li Pengfei, the first author of the paper, is a doctoral candidate at the Institute of Intelligent Industry of Tsinghua University. He graduated from the University of Chinese Academy of Sciences with a bachelor's degree. His research interests include autonomous driving and computer vision.
The corresponding author, Zhao Hao, is an incoming Assistant Professor at the Intelligent Industry Research Institute of Tsinghua University, a research scientist at Intel China Research Institute, and a joint postdoctoral fellow at Peking University. He graduated from Tsinghua University's Department of Electronic Engineering; his research interests are robotics and computer vision.
Paper address: https://arxiv.org/abs/2210.10775
Project address: https://github.com/AIR-DISCOVER/TOIST