The high-EQ NPC is here: the moment you reach out a hand, it is ready to cooperate with your next move
In virtual reality, augmented reality, games, and human-computer interaction, virtual characters often need to interact with players outside the screen. This interaction happens in real time, requiring the virtual character to adjust dynamically to the operator's movements. Some interactions also involve objects, such as carrying a chair together with an avatar, which demands special attention to the precise movements of the operator's hands. Intelligent, interactive virtual characters would greatly enhance the social experience between human players and virtual characters and open up a new form of entertainment.
In this study, the authors focus on interaction between humans and virtual humans, especially interactions that involve objects, and propose a new task called online whole-body action response synthesis: generating the virtual human's reaction from the observed human motion. Previous research mainly focused on human-to-human interaction, did not consider objects, and generated body reactions without hand motions. In addition, previous work did not treat the task as online inference; in real scenarios, the virtual human must decide its next move from what it has observed so far.
To support the new task, the authors first construct two datasets, named HHI and CoChair, and propose a unified method. Specifically, they first build a social affordance representation: they select a social affordance carrier, learn local coordinate frames on the carrier with an SE(3)-equivariant neural network, and then normalize the social affordance representation. The authors also propose a social affordance prediction scheme so that the virtual human can make decisions based on predicted human behavior.
Experiments show that the method generates high-quality reaction motions on the HHI and CoChair datasets and reaches a real-time inference speed of 25 frames per second on an A100. The authors also verify the method's effectiveness on the existing human interaction datasets InterHuman and CHI3D.
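To make the online setting concrete, here is a minimal sketch of what such a response loop might look like. The class and method names (ReactionModel, observe, generate_reaction) are illustrative placeholders rather than the authors' actual API; the only assumptions taken from the article are that the model sees human motion frame by frame and must emit the virtual human's next pose in real time (roughly 25 fps).

```python
import time
import numpy as np


class ReactionModel:
    """Placeholder for the reaction-synthesis network (illustrative only)."""

    def __init__(self, num_joints: int = 52):
        self.num_joints = num_joints
        self.history = []  # human poses observed so far

    def observe(self, human_pose: np.ndarray) -> None:
        # Append the newly observed human frame to the running history.
        self.history.append(human_pose)

    def generate_reaction(self) -> np.ndarray:
        # A real model would condition on the whole history (and on predicted
        # future motion); here we just return a dummy pose of the right shape.
        return np.zeros((self.num_joints, 3))


def online_loop(human_stream, model: ReactionModel, fps: float = 25.0):
    """Consume human poses one frame at a time and emit reactions online."""
    frame_time = 1.0 / fps
    for human_pose in human_stream:
        start = time.perf_counter()
        model.observe(human_pose)             # only past frames are visible
        reaction = model.generate_reaction()  # virtual human's next pose
        yield reaction
        # Keep the loop at (roughly) real-time rate.
        elapsed = time.perf_counter() - start
        time.sleep(max(0.0, frame_time - elapsed))


# Example: a synthetic 100-frame stream of 52-joint poses.
stream = (np.random.randn(52, 3) for _ in range(100))
for pose in online_loop(stream, ReactionModel()):
    pass  # send `pose` to the renderer / game engine
```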
For more details, please refer to the paper: https://arxiv.org/pdf/2312.08983.pdf
Project homepage: https://yunzeliu.github.io/iHuman/
To support the online whole-body action response synthesis task, the authors construct two datasets: HHI, which covers two-person interaction, and CoChair, which covers two-person interaction with objects. The HHI dataset records a wide range of interactions between two people, while the CoChair dataset records two people collaboratively manipulating an object. Together they give researchers a valuable resource for further exploring full-body reaction synthesis.
The HHI dataset is a large-scale whole-body action response dataset containing 30 interaction categories, 10 pairs of human skeletons, and a total of 5,000 interaction sequences.
The HHI dataset has three characteristics. First, it contains multi-person full-body interaction, covering both body and hand motion. The authors argue that hand motion cannot be ignored in multi-person interaction: during handshakes, hugs, and handovers, rich information is conveyed through the hands. Second, HHI distinguishes a clear initiator and responder for each behavior. In situations such as shaking hands, pointing a direction, greeting, or handing over an object, the dataset identifies who initiated the action, which helps researchers define and evaluate the problem more precisely. Third, HHI contains more diverse interactions and reactions: it not only covers 30 types of two-person interaction, but also provides multiple reasonable reactions to the same actor. For example, when someone greets you, you can respond with a nod, with one hand, or with both hands. This is natural behavior, yet it has rarely been captured or discussed in previous datasets.
CoChair is a large-scale dataset of multi-person and object interaction that includes 8 different chairs, 5 carrying modes, and 10 pairs of different skeletons, for a total of 3,000 sequences. CoChair has two important characteristics. First, there is information asymmetry during collaboration: every action has an executor/initiator, who knows the destination of the carrying task, and a responder, who does not. Second, it covers diverse carrying modes: one-hand fixed carry, one-hand mobile carry, two-hand fixed carry, two-hand mobile carry, and two-hand flexible carry.
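As a rough illustration of what one sequence in these datasets might contain, here is a hypothetical sample layout. The field names are assumptions made for illustration and are not the datasets' actual file format; they only restate what the article says each dataset records (actor and responder full-body motion, an interaction label, and, for CoChair, the chair geometry and trajectory).

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np


@dataclass
class InteractionSequence:
    """Hypothetical layout of one sequence (field names are illustrative)."""
    category: str                 # one of 30 HHI classes or 5 CoChair carrying modes
    actor_motion: np.ndarray      # (T, J, 3) body + hand joints of the initiator
    responder_motion: np.ndarray  # (T, J, 3) body + hand joints of the responder
    object_vertices: Optional[np.ndarray] = None  # (V, 3) chair geometry (CoChair only)
    object_poses: Optional[np.ndarray] = None     # (T, 4, 4) chair pose per frame (CoChair only)


# Example: a 200-frame handshake with 52 joints per person (HHI-style, no object).
seq = InteractionSequence(
    category="handshake",
    actor_motion=np.zeros((200, 52, 3)),
    responder_motion=np.zeros((200, 52, 3)),
)
```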
The social affordance carrier is the object or virtual human that carries the social affordance information. When a human interacts with a virtual human, the human typically comes into contact with the virtual human directly or indirectly; when objects are involved, the human typically touches the object.
To model direct or potential contact in an interaction, a carrier needs to be selected that can simultaneously represent the human, the carrier itself, and the relationship between them. In this study, the carrier is the object or the virtual human template that the human may come into contact with.
Based on this, the authors define a carrier-centered representation of social affordances. Specifically, given a carrier, human behavior is encoded to obtain a dense human-carrier joint representation. On top of this, the authors propose a social affordance representation that contains the human's motion, the dynamic geometric features of the carrier, and the human-carrier relationship at each time step.
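A minimal numpy sketch of one way such a dense human-carrier representation could be assembled is shown below. It simply stacks, for every carrier point and every frame, the offsets from that point to each human joint together with the point's own frame-to-frame displacement; the exact features and their encoding in the paper may differ, so treat this feature layout as an assumption.

```python
import numpy as np


def carrier_centered_features(human_joints: np.ndarray,
                              carrier_points: np.ndarray) -> np.ndarray:
    """Build a dense per-point, per-frame human-carrier representation.

    human_joints:   (T, J, 3) human joint positions over T frames.
    carrier_points: (T, P, 3) positions of P points sampled on the carrier
                    (an object or a virtual-human template) over the same frames.
    Returns:        (T, P, J*3 + 3) features: offsets from each carrier point
                    to every human joint, plus the point's frame-to-frame motion.
    """
    T, J, _ = human_joints.shape
    _, P, _ = carrier_points.shape

    # Offsets from each carrier point to every human joint (human-carrier relation).
    offsets = human_joints[:, None, :, :] - carrier_points[:, :, None, :]  # (T, P, J, 3)
    offsets = offsets.reshape(T, P, J * 3)

    # Per-point dynamics: displacement of each carrier point since the previous frame.
    velocity = np.zeros_like(carrier_points)
    velocity[1:] = carrier_points[1:] - carrier_points[:-1]                # (T, P, 3)

    return np.concatenate([offsets, velocity], axis=-1)


# Example: 60 frames, 52 human joints, 256 points sampled on a chair.
feats = carrier_centered_features(np.random.randn(60, 52, 3),
                                  np.random.randn(60, 256, 3))
print(feats.shape)  # (60, 256, 159)
```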
Note that the social affordance representation covers the data stream from the starting moment up to a given time step, rather than a single frame. The advantage of this design is that it tightly associates local regions of the carrier with the human's motion, yielding a representation that is convenient for the network to learn.
On top of the social affordance representation, the authors further apply social affordance normalization to simplify the representation space. The first step is to learn local frames on the carrier: an SE(3)-equivariant network learns a local coordinate system for the carrier. Human motion is then converted into each local coordinate system and densely encoded from each point's perspective, yielding a dense carrier-centered motion representation. This can be thought of as binding an "observer" to each local point on the carrier, with each observer encoding the human's motion from a first-person view. The advantage of this approach is that, while modeling the contact between humans, virtual humans, and objects, social affordance normalization simplifies the distribution of social affordances and makes the network easier to train.
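The normalization step can be illustrated as a per-point change of coordinates: every point on the carrier has its own local frame (a rotation and an origin), and the human motion is re-expressed in each of those frames, as if an observer at each point watched the human from a first-person view. In the sketch below the local frames are passed in as plain arrays; in the paper they are produced by an SE(3)-equivariant network, which is not reproduced here.

```python
import numpy as np


def normalize_to_local_frames(human_joints: np.ndarray,
                              frame_rotations: np.ndarray,
                              frame_origins: np.ndarray) -> np.ndarray:
    """Express human joints in each carrier point's local coordinate system.

    human_joints:    (T, J, 3) human joints in world coordinates.
    frame_rotations: (T, P, 3, 3) rotation of each point's local frame
                     (given directly here; learned by an SE(3)-equivariant
                     network in the paper).
    frame_origins:   (T, P, 3) origin of each local frame (the point itself).
    Returns:         (T, P, J, 3) joints seen from each point's local frame.
    """
    # Translate joints so each frame origin becomes the origin.
    rel = human_joints[:, None, :, :] - frame_origins[:, :, None, :]  # (T, P, J, 3)
    # Rotate into the local frame: x_local = R^T @ x_world
    # (note the transposed rotation indices 'ji' in the einsum).
    return np.einsum('tpji,tpkj->tpki', frame_rotations, rel)


# Example with identity frames: 60 frames, 52 joints, 256 carrier points.
T, J, P = 60, 52, 256
R = np.tile(np.eye(3), (T, P, 1, 1))
local = normalize_to_local_frames(np.random.randn(T, J, 3), R, np.random.randn(T, P, 3))
print(local.shape)  # (60, 256, 52, 3)
```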
To anticipate the behavior of the human interacting with the virtual human, the authors propose a social affordance prediction module. In real situations, the virtual human can only observe the history of the human's motion. The authors argue that the virtual human should be able to predict human behavior in order to plan its own actions better. For example, when someone raises their hand and walks toward you, you might assume they are about to shake your hand and prepare to respond.
During training, the virtual human can observe the complete human motion. At inference time, however, it can only observe the past dynamics of the human's behavior. The proposed prediction module forecasts the actions the human is about to take, improving the virtual human's perception. Concretely, a motion prediction module predicts the human actor's motion and the object's motion. For two-person interaction, HumanMAC is used as the prediction module. For two-person-object interaction, a motion prediction module is built on InterDiff, with an added prior that the person-object contact is stable, which simplifies the prediction of object motion.
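The stable-contact prior can be read as follows: once a hand is in contact with the chair, their relative pose is assumed to stay fixed, so future object poses can be obtained by rigidly attaching the object to the predicted hand trajectory. The sketch below expresses only that assumption with homogeneous transforms; the actual InterDiff-based predictor is not shown, and all names here are illustrative.

```python
import numpy as np


def propagate_object_with_contact(obj_pose_last: np.ndarray,
                                  hand_pose_last: np.ndarray,
                                  hand_poses_future: np.ndarray) -> np.ndarray:
    """Predict future object poses under a stable-contact assumption.

    obj_pose_last:     (4, 4) object pose at the last observed frame.
    hand_pose_last:    (4, 4) contacting hand pose at the same frame.
    hand_poses_future: (T, 4, 4) predicted future hand poses.
    Returns:           (T, 4, 4) future object poses, rigidly following the hand.
    """
    # Relative transform from hand to object, frozen at the moment of contact.
    hand_to_obj = np.linalg.inv(hand_pose_last) @ obj_pose_last
    # Apply the frozen relative transform to every predicted hand pose.
    return hand_poses_future @ hand_to_obj


# Example: 30 future frames in which the hand translates along x by 1 cm per frame.
T = 30
hand_future = np.tile(np.eye(4), (T, 1, 1))
hand_future[:, 0, 3] = 0.01 * np.arange(T)
obj_future = propagate_object_with_contact(np.eye(4), np.eye(4), hand_future)
print(obj_future.shape)  # (30, 4, 4)
```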
Quantitative testing shows that the method outperforms existing methods on all metrics. To verify each design choice, the authors conduct ablation experiments on the HHI dataset. Without social affordance normalization, performance drops significantly, which suggests that using normalization to simplify the feature space is necessary. Without social affordance prediction, the method loses the ability to anticipate the human actor's motion, which also degrades performance. To verify the necessity of local coordinate systems, the authors also compare against a global coordinate system; the local coordinate system is clearly better, demonstrating the value of local frames for describing local geometry and potential contacts.
The visualization results show that, compared with prior approaches, virtual characters trained with the proposed method react faster, capture local gestures more accurately, and generate more realistic and natural grasping motions during collaboration.
For more research details, please see the original paper.