An article briefly analyzing multi-sensor fusion for autonomous driving
The core of autonomous driving lies in the car itself, so what is an intelligent connected-vehicle system? Its carrier is also the car, but its core is the network to be connected: one network is made up of the sensors and intelligent control systems inside the vehicle, and the other is the network through which all vehicles are connected and share information. Connecting a car to this larger network lets it exchange important information such as position, route, and speed. The development goal of intelligent connected-vehicle systems is to improve the safety and comfort of the car by optimizing the design of its internal sensors and control systems, making the car more human-friendly; the ultimate goal, of course, is driverless driving.
Autonomous vehicles rely on three core systems: the environment perception system, the decision-making and planning system, and the control and execution system. These are also the three key technical problems that an intelligent connected vehicle itself must solve.
What is environment perception technology, and what does it mainly include?
Environment perception mainly covers three aspects: sensors, perception, and positioning. The sensors include cameras, millimeter-wave radar, lidar, and ultrasonic sensors; they are mounted on the vehicle to collect data, identify colors, and measure distances.
For a smart car to use this sensor data for intelligent driving, the raw data must be processed by perception algorithms into usable results, enabling the exchange of information about vehicles, roads, and people. The vehicle can then automatically assess whether it is driving safely or dangerously, drive intelligently according to the driver's intent, and ultimately replace the human in making decisions, achieving the goal of autonomous driving.
This raises a key technical question: different sensors play different roles, so how can the data scanned by multiple sensors be combined into a complete picture of an object?
Multi-sensor fusion technology
The camera's main function is to identify the color of objects, but it is affected by rain. Millimeter-wave radar compensates for this weakness and can detect relatively distant obstacles, such as pedestrians and roadblocks, but it cannot identify an obstacle's specific shape. Lidar, in turn, makes up for millimeter-wave radar's inability to recognize specific shapes. Ultrasonic radar mainly detects short-range obstacles around the vehicle body and is typically used during parking. To fuse the external data collected by these different sensors into a basis for the controller's decisions, a multi-sensor fusion algorithm is needed to form a panoramic perception of the environment.
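As an illustration only, the sketch below shows one way such complementary attributes could be collected into a single fused object record once detections from different sensors have been matched to the same physical object; the class and field names are hypothetical, not any standard interface.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical per-object record a fusion module might maintain;
# field names are illustrative assumptions, not a standard interface.
@dataclass
class FusedObject:
    color: Optional[str] = None          # from the camera
    distance_m: Optional[float] = None   # from millimeter-wave radar
    shape_points: list = field(default_factory=list)  # lidar point cloud
    near_obstacle: bool = False          # from ultrasonic (parking range)

def merge(camera_det: dict, radar_det: dict, lidar_det: dict, ultra_det: dict) -> FusedObject:
    """Combine the complementary attributes of one matched object."""
    return FusedObject(
        color=camera_det.get("color"),
        distance_m=radar_det.get("range_m"),
        shape_points=lidar_det.get("points", []),
        near_obstacle=ultra_det.get("range_m", 10.0) < 1.5,
    )
```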
The basic principle of multi-sensor fusion is much like the way the human brain integrates information: multiple sensors complement each other, and their information is combined at multiple levels and across multiple spaces to finally produce a consistent interpretation of the observed environment. In this process, multi-source data must be exploited and managed sensibly; the ultimate aim of information fusion is to derive more useful information from the separate observations of each sensor by combining information at multiple levels and from multiple aspects. This not only takes advantage of the cooperative operation of multiple sensors but also incorporates data from other information sources, improving the intelligence of the whole sensor system.
The concept of multi-sensor data fusion first appeared in the military field. In recent years, with the development of autonomous driving, various radars have been used for vehicle detection. Because different sensors differ in accuracy, how should the final fused value be determined? For example, if the lidar reports that the vehicle ahead is 5 m away, the millimeter-wave radar reports 5.5 m, and the camera estimates 4 m, how should the central processor make the final judgment? A multi-sensor data fusion algorithm is needed to answer this question.
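As a simple illustration of how such conflicting readings could be reconciled, the sketch below uses an inverse-variance weighted average: sensors believed to be more accurate (smaller variance) pull the estimate harder. The per-sensor variances are made-up values chosen only to show the idea, not real sensor specifications.

```python
# Distance to the lead vehicle, in meters, as (reading, assumed variance).
# The variances are illustrative assumptions, not real sensor specs.
measurements = {
    "lidar": (5.0, 0.05),
    "radar": (5.5, 0.20),
    "camera": (4.0, 1.00),
}

# Weight each reading by the inverse of its assumed variance.
weights = {name: 1.0 / var for name, (_, var) in measurements.items()}
fused = sum(w * measurements[name][0] for name, w in weights.items()) / sum(weights.values())
print(f"fused distance: {fused:.2f} m")  # lands close to the lidar reading
```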
Commonly used multi-sensor fusion methods fall into two categories: stochastic and artificial-intelligence-based. The AI category mainly includes fuzzy logic reasoning and artificial neural network methods; the stochastic category mainly includes algorithms such as Bayesian filtering and Kalman filtering. At present, automotive fusion perception mainly uses stochastic fusion algorithms.
Fusion perception in autonomous vehicles mainly relies on the Kalman filter, which uses a linear system state equation to produce an optimal estimate of the system state from the system's input and output observations. For most problems of this kind it is currently among the best and most efficient methods.
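The following is a minimal one-dimensional Kalman filter sketch applied to the distance-to-lead-vehicle example above. The process and measurement noise values are illustrative assumptions rather than tuned parameters, and a production tracker would be considerably more involved.

```python
import numpy as np

# State: [distance, relative speed]. Noise values are illustrative assumptions.
dt = 0.1                                   # time step (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (constant velocity)
H = np.array([[1.0, 0.0]])                 # we only measure distance
Q = np.diag([0.01, 0.01])                  # process noise covariance
x = np.array([[5.0], [0.0]])               # initial state estimate
P = np.eye(2)                              # initial state covariance

def kalman_step(x, P, z, R):
    """One predict/update cycle for a single distance measurement z with variance R."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Feed in the three sensors' readings with their assumed measurement variances.
for z, R in [(5.0, 0.05), (5.5, 0.20), (4.0, 1.00)]:
    x, P = kalman_step(x, P, np.array([[z]]), np.array([[R]]))

print(f"estimated distance: {x[0, 0]:.2f} m")
```

Readings from the more trusted sensors (smaller measurement variance) influence the estimate more strongly, which is the same intuition as the weighted average above but extended to track a state over time.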
Because multiple sensors must be processed by fusion algorithms, companies need fusion perception algorithm engineers to solve multi-sensor fusion problems. Most job postings in this area require mastery of the working principles of the various sensors and the characteristics of their signals, the ability to develop software around fusion algorithms and sensor calibration algorithms, and skills such as point cloud data processing and deep learning detection algorithms.
SLAM stands for simultaneous localization and mapping. Assuming a static scene, it recovers the 3-D structure of the scene from an image sequence obtained as the camera moves, which is an important task in computer vision. When the data being processed by the algorithm comes from a camera, this is visual SLAM.
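As a rough sketch of the frontend of such a pipeline, the snippet below uses OpenCV to match ORB features between two consecutive camera frames and recover the relative camera motion, a visual-odometry building block of visual SLAM. The image file names and the intrinsic matrix K are placeholder assumptions; a full SLAM system would add mapping, loop closure, and optimization on top of this.

```python
import cv2
import numpy as np

# Assumed camera intrinsics; replace with real calibration values.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Placeholder file names for two consecutive grayscale frames.
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix and decompose it into rotation R and
# translation t (up to scale) between the two camera poses.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

print("relative rotation:\n", R)
print("relative translation (unit scale):\n", t.ravel())
```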
Besides visual SLAM, environment perception and positioning methods also include lidar SLAM, GPS/IMU, and high-precision maps. The data from these sensors must likewise be processed by algorithms into results that provide the location information on which autonomous driving decisions are based. Therefore, if you want to work in environment perception, you can choose not only fusion perception algorithm positions but also positions in the SLAM field.