
A brief analysis of the development of human-computer interaction in smart cockpits


At present, cars have changed not only in power source, driving method, and driving experience; the cockpit, too, has bid farewell to the dull mechanical-electronic space of the past. As its level of intelligence has soared, the cockpit has become a "third space" in people's lives, after home and the office. Through technologies such as face and fingerprint recognition, voice and gesture interaction, and multi-screen linkage, today's smart cockpits have significantly enhanced their environmental perception and information collection and processing capabilities, becoming "intelligent assistants" for the driver.

One significant sign that the smart cockpit has bid farewell to simple electronics and entered the intelligent-assistant stage is that interaction between humans and the cockpit has changed from passive to active. Here "passive" and "active" are defined from the cockpit's point of view: in the past, information exchange was initiated mainly by people, whereas now it can be initiated by either side. The level of human-machine interaction has become an important benchmark for grading smart cockpit products.


Human-computer interaction development background

The history of computers and mobile phones shows how interaction between machines and humans has developed: from complex to simple, from abstract operations to natural interaction. The most important future trend in human-computer interaction is for machines to move from passive response to active interaction. Extending this trend, the ultimate goal of human-machine interaction is to anthropomorphize machines, making interaction between humans and machines as natural and smooth as communication between people. In other words, the history of human-computer interaction is the history of moving from people adapting to machines to machines adapting to people.

The development of the smart cockpit has followed a similar path. With the advancement of electronic technology and rising owner expectations, there are more and more electronic signals and functions inside and outside the car, and interaction must be designed so that owners waste fewer attention resources and are less distracted while driving. As a result, in-car interaction has evolved step by step: physical knobs/buttons → digital touch screens → voice control → natural interaction.

Natural interaction is the ideal model for the next generation of human-computer interaction

What is natural interaction?

In short, natural interaction means communicating through movement, eye tracking, speech, and similar channels. The modalities involved resemble human "perception": they mix multiple senses, corresponding to the five human senses of vision, hearing, touch, smell, and taste. The corresponding information media are the various sensors: microphones, cameras, text input, infrared, pressure, radar, and so on. A smart car is essentially a manned robot; its two most critical capabilities are controlling itself and interacting with people, and without either it cannot work efficiently with humans. An intelligent human-computer interaction system is therefore essential.

How to realize natural interaction

More and more sensors are being integrated into the cockpit, improving the diversity, richness, and accuracy of the data available. On the one hand this makes computing-power demands in the cockpit leap forward; on the other, it provides much better perception, making richer cockpit scenarios and better interactive experiences possible. Visual processing is the key technology for cockpit human-computer interaction, and sensor fusion is the real solution. Take speech recognition in noisy conditions: microphones alone are not enough. A person in the same situation listens selectively, not only with the ears but also with the eyes. Likewise, by visually locating the sound source and reading lips, a system can achieve better results than audio-only recognition. If the sensors are a person's five senses, then the computing platform is the brain: AI algorithms combine vision and speech and, through various recognition methods, process signals such as faces, movements, postures, and voices. The result is more intelligent interaction, including eye tracking, speech recognition, lip-reading-assisted recognition, and driver fatigue detection.
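As a rough illustration of such audio-visual fusion, here is a minimal Python sketch. The upstream audio and lip-reading recognizers are assumed, not real libraries, and the SNR-to-weight mapping is an invented heuristic: the point is only that weight shifts toward the visual stream as the cabin gets noisier.

```python
import numpy as np

def fuse_audio_visual(audio_probs: np.ndarray,
                      visual_probs: np.ndarray,
                      snr_db: float) -> np.ndarray:
    """Late fusion of audio and lip-reading posteriors.

    audio_probs, visual_probs: (num_frames, num_classes) arrays of
    per-frame class probabilities from two separate recognizers.
    snr_db: estimated in-cabin signal-to-noise ratio; the noisier the
    audio, the more weight shifts to the visual stream.
    """
    # Map SNR to an audio weight in [0.2, 0.9]: clean audio dominates,
    # noisy audio defers to lip reading. Illustrative mapping only.
    w_audio = float(np.clip((snr_db + 5.0) / 25.0, 0.2, 0.9))
    fused = w_audio * audio_probs + (1.0 - w_audio) * visual_probs
    return fused / fused.sum(axis=1, keepdims=True)  # defensive renormalize

# Toy usage: 3 frames, 4 candidate phonemes/commands.
rng = np.random.default_rng(0)
audio = rng.dirichlet(np.ones(4), size=3)
visual = rng.dirichlet(np.ones(4), size=3)
print(fuse_audio_visual(audio, visual, snr_db=2.0).round(3))
```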

Cockpit human-computer interaction usually needs to be implemented with edge computing rather than cloud computing, for three reasons: safety, real-time performance, and privacy. Cloud computing depends on the network, and for a smart car a wireless connection cannot be guaranteed to be reliable; transmission delay is also uncontrollable, so smooth interaction cannot be assured. For functions in the vehicle's safety domain, only edge computing can guarantee a complete user experience.

Personal information security remains a challenge as well: the cabin is a private space, and privacy there is especially sensitive. Today's personalized voice recognition is implemented mainly in the cloud, and biometric information such as voiceprints can readily expose private identity information. With edge AI on the vehicle side, private biometric data such as images and voice can be converted into semantic information inside the car and only then uploaded to the cloud, effectively protecting personal information.
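A minimal sketch of this pattern, with hypothetical on-device ASR and NLU stubs: raw audio and the voiceprint stay on the vehicle, and only a derived intent plus a salted one-way speaker token are uploaded.

```python
import hashlib
import json

def local_asr(raw_audio: bytes) -> str:
    # Stand-in for an embedded speech recognizer (hypothetical).
    return "turn on the air conditioner"

def local_nlu(text: str) -> dict:
    # Stand-in for an embedded intent parser (hypothetical).
    return {"action": "hvac_on", "slots": {"text": text}}

def on_device_process(raw_audio: bytes, voiceprint: list) -> str:
    """Runs entirely on the vehicle's edge compute: raw audio and the
    voiceprint never leave the car; only derived semantics are uploaded."""
    intent = local_nlu(local_asr(raw_audio))
    # Replace the raw voiceprint with a salted one-way token so the cloud
    # can personalize without ever receiving biometric data.
    token = hashlib.sha256(
        b"per-vehicle-salt" + repr(voiceprint).encode()
    ).hexdigest()[:16]
    return json.dumps({"speaker": token, "intent": intent})

print(on_device_process(b"\x00\x01", [0.12, -0.40, 0.77]))
```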

In the era of autonomous driving, interactive intelligence must match driving intelligence

For the foreseeable future, human-machine co-driving will remain a long-standing reality, and cockpit human-machine interaction will be the primary interface through which people grasp what the autonomous system is doing. At present, the field of intelligent driving faces uneven evolution: the level of human-computer interaction lags behind the progress of autonomous driving, causing frequent problems and hindering adoption. Human-machine cooperation is characterized by keeping the human in the operation loop, so the interaction function must be kept consistent with the autonomous driving function. Failing to do so creates serious risks to the safety of the intended functionality (SOTIF), risks associated with the vast majority of fatal autonomous driving incidents. Once the interaction interface can convey the system's own perception of the driving situation, the capability boundary of the autonomous driving system becomes easier to understand, which greatly helps acceptance of L-level autonomous driving functions.

Of course, today's smart cockpit interaction is mainly an extension of the mobile phone's Android ecosystem, supported primarily by the head unit's screen. Displays keep getting larger, which in practice means low-priority functions crowd out high-priority ones, adding signal interference and affecting operational safety. Physical displays will still exist in the future, but I believe they will gradually be replaced by natural human-computer interaction via AR-HUD.

First, if intelligent driving develops to L4 or above, people will be freed from boring, tiring driving, and the car truly will become "people's third living space." The layout of the cabin's entertainment area and its safety functional area (human-computer interaction and automatic control) will then change, with the safety area becoming the main control area. Autonomous driving is the interaction between the car and the environment; human-machine interaction is the interaction between people and the car. Integrated together, they complete the collaboration of people, car, and environment, forming a complete closed loop of driving.

Second, the AR-HUD dialogue interface is safer. Communicating with it through words or gestures avoids diverting the driver's attention, improving driving safety. A large cockpit screen simply cannot do this, while the AR-HUD circumvents the problem and can display the autonomous driving system's perception signals at the same time.

Third, natural conversation is implicit, concise, and emotional. It does not occupy valuable physical space in the car, yet it accompanies the occupants anytime, anywhere. In the future, therefore, the in-domain integration of intelligent driving and the smart cockpit will be the safer development path, ultimately evolving into the car's central system.

Practical principles of human-computer interaction

Touch interaction

Early center-console screens displayed little more than radio information, and most of the console area was taken up by physical buttons. Communication with the car was achieved essentially through tactile interaction.

With the development of intelligent interaction, large central control screens appeared and physical buttons began to disappear. The central screen has grown ever larger and more important, while the physical buttons on the console have dwindled to almost none. Occupants can no longer interact with the car through touch alone; at this stage interaction shifts to the visual channel, and people operate the car mainly by looking. But communicating with the smart cockpit using vision alone is distinctly inconvenient. While driving, roughly 90% of the driver's visual attention must be devoted to observing road conditions, so the driver cannot stare at a screen for long to converse with the cockpit.

Voice interaction

(1) Principle of voice interaction.

Speech recognition (ASR) → natural language understanding (NLU) → speech synthesis (TTS).
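As a sketch of how these three stages chain together in one voice turn (all four functions below are hypothetical stubs standing in for real ASR, NLU, dispatch, and TTS components):

```python
def recognize(audio: bytes) -> str:          # ASR: audio -> text
    return "set temperature to 22 degrees"   # hypothetical stub

def understand(text: str) -> dict:           # NLU: text -> structured intent
    return {"action": "set_ac_temp", "slots": {"temp": 22}}

def execute(intent: dict) -> str:            # dispatch to a vehicle function
    return "Air conditioning set to 22 degrees."

def synthesize(reply: str) -> bytes:         # TTS: reply text -> audio
    return reply.encode("utf-8")             # placeholder waveform

def voice_turn(audio: bytes) -> bytes:
    """One turn of the cockpit voice loop: recognize, understand, act, reply."""
    return synthesize(execute(understand(recognize(audio))))

print(voice_turn(b"...")[:30])
```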

(2) Scenarios for voice interaction.

Voice control has two main elements in scenario applications. One is that it can replace touch-screen operations, without on-screen prompts, through natural dialogue with the human-machine interface. The other is that it minimizes manual control, improving safety.

First, on the commute home from work, drivers want to quickly control the vehicle, query information, and adjust the air conditioning, seats, and so on while driving; on long journeys they want to look up service areas and gas stations along the route and check the schedule. Second, voice links everything: in-car music and secondary-screen entertainment can be invoked quickly. So the first thing to get right is quick control of the vehicle.

The first is quick control of the car, covering basic functions such as adjusting the ambient lighting, volume, air-conditioning temperature, windows, and rearview mirrors. The original intention is to let the driver control the vehicle faster, reducing distraction and improving the safety factor. Far-field voice interaction is an important entrance to the whole system, because the system must understand the driver's spoken instructions and provide intelligent navigation: it should not only passively accept tasks but also proactively offer extra services such as destination introductions and schedule planning.
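One common way to structure quick control is a dispatch table from NLU intents to vehicle-control callbacks. The sketch below is illustrative only; all action names and slots are assumptions, and unsupported intents fall through to the fault-tolerance handling described later.

```python
from typing import Callable

# Hypothetical mapping from NLU intents to vehicle-control callbacks.
VEHICLE_ACTIONS: dict[str, Callable[[dict], str]] = {
    "set_ac_temp":   lambda s: f"Air conditioning set to {s['temp']} degC",
    "set_volume":    lambda s: f"Volume set to {s['level']}",
    "open_window":   lambda s: f"{s['position']} window opened",
    "adjust_mirror": lambda s: f"{s['side']} mirror adjusted",
}

def dispatch(intent: dict) -> str:
    action = VEHICLE_ACTIONS.get(intent["action"])
    if action is None:
        return "Sorry, I can't do that yet."  # hand off to fault tolerance
    return action(intent.get("slots", {}))

print(dispatch({"action": "set_ac_temp", "slots": {"temp": 22}}))
```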

Next comes monitoring of the vehicle and the driver. During operation, vehicle performance and status such as tire pressure, radiator temperature, coolant, and engine oil can be queried at any time; real-time queries help drivers process information in advance, and the system should also alert them when a reading reaches a warning threshold. Besides monitoring the vehicle, the driver needs monitoring too: combining biometrics with voice monitoring can track the driver's emotions, remind the driver to stay alert at the right moment to avoid accidents, and guard against fatigue from long drives. Finally, in multimedia entertainment, playing music and radio are the most frequent operations in driving scenarios. Beyond simple functions such as play, pause, and track switching, personalized functions such as favorites, account login, play history, play-order switching, and live interaction are still waiting to be developed.
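A minimal sketch of threshold-based status monitoring; the limits below are illustrative placeholders, and real values would come from the vehicle specification.

```python
# Illustrative warning thresholds; real values come from the vehicle spec.
THRESHOLDS = {
    "tire_pressure_kpa": (200, 280),   # (min, max) acceptable range
    "coolant_temp_c":    (-10, 105),
    "oil_level_pct":     (15, 100),
}

def check_status(readings: dict[str, float]) -> list[str]:
    """Compare live readings against warning limits; return spoken alerts."""
    alerts = []
    for name, value in readings.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            alerts.append(f"Warning: {name} is {value}, outside {low}-{high}.")
    return alerts

print(check_status({"tire_pressure_kpa": 180, "coolant_temp_c": 92,
                    "oil_level_pct": 40}))
```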

Accommodating errors

Voice conversations must allow for fault tolerance, and basic fault tolerance is handled scenario by scenario. The first case is that the system did not understand, and asks the user to say it again. The second is that the system understood but cannot handle the request. The third is a suspected misrecognition, which the system can confirm with the user before acting.
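The three branches map naturally onto a small handler; the sketch below assumes an ASR confidence score and a flag for whether the intent is supported, and the 0.6 cutoff is an illustrative assumption.

```python
def handle_turn(asr_text: str | None, confidence: float,
                can_execute: bool) -> str:
    """Three fault-tolerance branches for one voice turn."""
    if asr_text is None:                      # 1. nothing recognized
        return "Sorry, I didn't catch that. Could you say it again?"
    if not can_execute:                       # 2. understood but unsupported
        return f"I heard '{asr_text}', but I can't do that yet."
    if confidence < 0.6:                      # 3. possible misrecognition
        return f"Did you mean '{asr_text}'? Please confirm."
    return f"OK, doing: {asr_text}."

print(handle_turn(None, 0.0, can_execute=False))
print(handle_turn("open sunroof", 0.9, can_execute=False))
print(handle_turn("navigate home", 0.5, can_execute=True))
```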

Face recognition

(1) Principle of face recognition.

Facial recognition technology in the cockpit generally involves three stages: face detection, facial feature extraction, and feature matching. As online identity information becomes increasingly biometric, facial information is registered on many platforms, and the car is a focal point of the Internet of Everything: as more mobile usage scenarios move into the car, account login and identity authentication need to happen there too.
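The three stages can be sketched as a pipeline. The detector and embedding network below are crude stand-ins (a fixed crop and a flatten-and-normalize), and the 0.8 similarity threshold is an assumption; only the structure is meant to be illustrative.

```python
import numpy as np

def detect_face(frame: np.ndarray) -> np.ndarray:
    # Stand-in for a face detector: crop a fixed region of the cabin camera.
    return frame[40:140, 60:160]

def extract_features(face: np.ndarray) -> np.ndarray:
    # Stand-in for an embedding network: flatten and L2-normalize.
    v = face.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def match(embedding: np.ndarray, enrolled: dict[str, np.ndarray],
          threshold: float = 0.8) -> str | None:
    # Cosine similarity against enrolled owner templates.
    best_id, best_sim = None, threshold
    for user_id, template in enrolled.items():
        sim = float(embedding @ template)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id

rng = np.random.default_rng(1)
frame = rng.integers(0, 255, size=(200, 200))
emb = extract_features(detect_face(frame))
print(match(emb, {"owner": emb}))  # matching against oneself -> "owner"
```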

(2) Face recognition usage scenarios.

Before driving, face recognition verifies the owner's identity on entering the car and logs in application accounts. While driving, its main job is monitoring: detecting fatigue and distraction from closed eyes, phone use, eyes off the road, and yawning, and issuing reminders.
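Closed-eye fatigue detection is often framed as a PERCLOS-style measure: the fraction of recent frames in which the eyes are closed. A minimal sketch, assuming an upstream eye-state classifier and with illustrative window and threshold values:

```python
from collections import deque

class FatigueMonitor:
    """PERCLOS-style check: fraction of recent frames with eyes closed."""

    def __init__(self, window: int = 900, threshold: float = 0.3):
        self.frames = deque(maxlen=window)   # e.g. 30 s at 30 fps
        self.threshold = threshold

    def update(self, eyes_closed: bool) -> bool:
        self.frames.append(eyes_closed)
        closed_ratio = sum(self.frames) / len(self.frames)
        return closed_ratio > self.threshold  # True -> trigger an alert

monitor = FatigueMonitor(window=10, threshold=0.5)
for closed in [False, True, True, True, True, True, False, True]:
    if monitor.update(closed):
        print("Driver looks drowsy: suggest a break.")
        break
```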

A single interaction mode can actually make things harder for the driver. Voice alone is prone to misrecognition, and touch alone cannot satisfy the 3-second principle for glances away from the road. Only when multiple modes such as voice, gesture, and vision are fused can the intelligent system communicate with the driver accurately, conveniently, and safely in all scenarios.

Challenges and future of human-computer interaction

Challenges of human-computer interaction

Ideal natural interaction starts from the user's experience and creates a safe, smooth, predictable interactive experience. But however rich the vision, we must start from the facts, and at present many challenges remain.

At present, misrecognition in natural interaction is still a serious problem, and reliability and accuracy under all working conditions and all weather are far from sufficient. In gesture recognition, for example, vision-based recognition rates are still low, so new algorithms must be developed to improve accuracy and speed. An unintentional gesture can be mistaken for a command, and that is only one of countless possible misunderstandings; in a moving vehicle, changing light, vibration, and occlusion are all major technical issues. To reduce the misrecognition rate, multiple technical means must work together: multi-sensor fusion verification, voice confirmation, and other methods matched to the operating scenario. Second, the smoothness of natural interaction is still a difficulty that must be overcome, requiring more advanced sensors, stronger capabilities, and more efficient computing. Meanwhile, natural language processing and intent understanding are still in their infancy and need deep algorithmic research.
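One way to realize such fusion verification is to gate low-confidence gestures behind a second channel such as gaze or voice confirmation. The thresholds in this sketch are illustrative assumptions, not calibrated values.

```python
def accept_gesture(gesture: str, gesture_conf: float,
                   gaze_on_target: bool, voice_confirmed: bool) -> bool:
    """Only execute a gesture command when a second channel backs it up."""
    if gesture_conf >= 0.95:
        return True                      # unambiguous gesture: act directly
    if gesture_conf >= 0.7:
        # Borderline: require gaze on the relevant control or a spoken "yes".
        return gaze_on_target or voice_confirmed
    return False                         # likely an unintentional movement

print(accept_gesture("swipe_next_track", 0.8, gaze_on_target=True,
                     voice_confirmed=False))
```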

In the future, cockpit human-computer interaction will move toward the virtual world and emotional connection

One of the reasons consumers are willing to pay for intelligent functions beyond basic mobility is dialogue and experience. As noted above, the future smart cockpit will develop around people and evolve into the third space of people's lives.

This kind of human-computer interaction is by no means a simple call-and-response, but a multi-channel, multi-level, multi-modal communication experience. From the occupants' perspective, the future intelligent cockpit will use speech as the main communication channel, with touch, gestures, motion, expressions, and so on as auxiliary channels, freeing the occupants' hands and eyes and reducing the risk of driver distraction.

As sensors in the cockpit multiply, it is a clear trend that human-computer interaction will shift from serving the driver alone to serving all occupants. The smart cockpit can build a virtual space, and natural interaction will bring a new immersive extended-reality entertainment experience: powerful compute, combined with the cockpit's interactive equipment, can build an in-car metaverse offering various immersive games. The smart cockpit may prove a good carrier for the metaverse.

Natural human-machine interaction also brings emotional connection. The cockpit becomes a companion, and an ever more intelligent one: it learns the owner's behavior, habits, and preferences, senses the in-cabin environment, and, combined with the vehicle's current location, proactively provides information and function prompts when needed. With the development of artificial intelligence, we may within our lifetime see machine emotional connection gradually enter personal life. Ensuring that the technology is used for good may be another major issue we must face by then. But either way, technology will develop in this direction.

Summary of human-computer interaction in smart cockpits

Amid the current fierce competition in the automobile industry, the intelligent cockpit system has become key to product differentiation for vehicle OEMs. Because the cockpit human-computer interaction system is closely tied to people's communication behavior, language, and culture, it needs deep localization. Intelligent-vehicle human-computer interaction is thus an important breakthrough for the brand upgrade of Chinese intelligent-vehicle companies, and an opportunity for Chinese intelligent-vehicle technology to lead global trends.

The integration of these interaction modes will provide a more complete immersive experience in the future, continuing to push new interaction methods and technologies toward maturity, and is expected to evolve from today's experience-enhancing functions into must-have features of future smart cockpits. Smart cockpit interaction technology is expected to cover a wide range of travel needs, from basic safety needs to the deeper psychological needs of belonging and self-realization.

