An article explaining the key technical difficulties of autonomous driving
The Society of Automotive Engineers (SAE) divides autonomous driving into six levels, from L0 (no automation) to L5 (full automation), based on the degree of vehicle intelligence.
The software and hardware architecture of an autonomous vehicle is shown in Figure 2. It is divided into an environment perception layer, a decision-making and planning layer, a control layer, and an execution layer. The environment perception layer obtains information about the surrounding environment and the vehicle's own state through sensors such as lidar, millimeter-wave radar, ultrasonic radar, vehicle-mounted cameras, night vision systems, GPS, and gyroscopes. Its tasks include lane line detection, traffic light recognition, traffic sign recognition, pedestrian detection, vehicle detection, obstacle recognition, and vehicle positioning. The decision-making and planning layer is divided into task planning, behavior planning, and trajectory planning: based on the planned route, the environment, and the vehicle's own state, it plans the next driving tasks (lane keeping, lane changing, following, overtaking, collision avoidance, etc.), behaviors (acceleration, deceleration, turning, braking, etc.), and paths (driving trajectories). The control and execution layers control the vehicle's driving, braking, and steering based on a vehicle dynamics model so that the vehicle follows the prescribed trajectory.
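To make the data flow between these layers concrete, here is a minimal, hypothetical Python sketch of the perception-planning-control-execution pipeline described above. The class and function names are placeholders invented for illustration, not part of any real autonomous-driving stack.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Perception:
    """Output of the environment perception layer (hypothetical fields)."""
    lane_lines: List[tuple]   # detected lane line points
    obstacles: List[tuple]    # (x, y, radius) of detected obstacles
    ego_position: tuple       # (x, y, heading) from positioning

@dataclass
class Plan:
    """Output of the decision-making and planning layer."""
    behavior: str             # e.g. "lane_keep", "change_lane", "brake"
    trajectory: List[tuple]   # list of (x, y) waypoints to follow

def perceive(sensor_frames: dict) -> Perception:
    # A real system would fuse lidar, radar, camera, GPS and IMU data here.
    return Perception(lane_lines=[], obstacles=[], ego_position=(0.0, 0.0, 0.0))

def plan(perception: Perception) -> Plan:
    # Task, behavior and trajectory planning from environment and vehicle state.
    return Plan(behavior="lane_keep", trajectory=[(0.0, 0.0), (1.0, 0.0)])

def control(plan: Plan) -> dict:
    # Convert the planned trajectory into throttle / brake / steering commands.
    return {"throttle": 0.1, "brake": 0.0, "steering": 0.0}

# One cycle of the pipeline: perception -> planning -> control -> execution.
commands = control(plan(perceive({"camera": None, "lidar": None})))
print(commands)
```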
Autonomous driving involves many key technologies. This article mainly introduces environment perception, high-precision positioning, decision-making and planning, and control and execution.
Environment perception refers to the ability to understand the environment: detecting obstacle types, road signs and markings, and moving vehicles, and classifying traffic information. Positioning is the post-processing of the perception results; it helps the vehicle understand its position relative to the environment. Environment perception requires obtaining a large amount of information about the surroundings through sensors to ensure a correct understanding of the vehicle's environment, so that planning and decisions can be made on that basis.
Commonly used environment perception sensors for autonomous vehicles include cameras, lidar, millimeter-wave radar, infrared sensors, and ultrasonic radar. The camera is the most commonly used and simplest of these sensors, and its imaging principle is closest to that of the human eye. By capturing the environment around the vehicle in real time, computer vision (CV) techniques analyze the captured images to implement functions such as vehicle detection, pedestrian detection, and traffic sign recognition around the vehicle.
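As a simple illustration of camera-based perception, here is a minimal sketch of pedestrian detection using OpenCV's built-in HOG person detector. The image path is a placeholder, and a production system would use far more robust deep-learning detectors.

```python
import cv2

# Load a frame from the vehicle camera (placeholder path).
frame = cv2.imread("camera_frame.jpg")

# OpenCV ships a HOG descriptor with a pretrained pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Detect pedestrians; returns bounding boxes as (x, y, w, h).
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

# Draw the detections on the frame and save the result.
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", frame)
```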
The main advantages of the camera are its high resolution and low cost. However, at night and in bad weather such as rain, snow, or haze, its performance declines rapidly. In addition, the camera's viewing distance is limited, making it poorly suited to long-range observation.
Millimeter-wave radar is also a commonly used sensor for autonomous vehicles. It refers to radar working in the millimeter-wave band (wavelength 1-10 mm, frequency 30-300 GHz) and detects targets based on time-of-flight (ToF): the radar continuously transmits millimeter-wave signals, receives the signal reflected by a target, and determines the distance between the target and the vehicle from the time difference between transmission and reception. Millimeter-wave radar is therefore mainly used to prevent collisions between the car and surrounding objects, in functions such as blind spot detection, obstacle avoidance assistance, parking assistance, and adaptive cruise control. It has strong anti-interference ability; its ability to penetrate rain, sand, dust, smoke, and plasma is much stronger than that of laser and infrared, and it can work in all weather. However, it also has shortcomings: the signal attenuates strongly, it is easily blocked by buildings, human bodies, and the like, its transmission distance is short, its resolution is low, and it is difficult to use for imaging.
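The time-of-flight principle described above amounts to a simple calculation: the range to the target is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_range(round_trip_time_s: float) -> float:
    """Range to a target from the round-trip time of a radar (or lidar) pulse."""
    # The signal travels to the target and back, so halve the path length.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a 0.5 microsecond round trip corresponds to roughly 75 m.
print(f"{tof_range(0.5e-6):.1f} m")
```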
Lidar also uses ToF to determine a target's position and distance. It detects targets by emitting laser beams; its detection accuracy and sensitivity are higher and its detection range is wider, but it is more susceptible to interference from rain, snow, and haze in the air, and its high cost is the main factor restricting its application. Vehicle-mounted lidar can be divided into single-line, 4-line, 8-line, 16-line, and 64-line lidar according to the number of laser beams emitted. Table 1 compares the advantages and disadvantages of the mainstream sensors.
Autonomous driving environment perception usually follows one of two technical routes: "weak perception, super intelligence" and "strong perception, strong intelligence". The "weak perception, super intelligence" route relies mainly on cameras and deep learning to achieve environment perception, without relying on lidar. Its premise is that, just as humans can drive with a pair of eyes, a car can see its surroundings clearly with cameras alone. If such super intelligence is difficult to achieve for the time being, then perception capability must be enhanced instead in order to achieve driverless operation; this is the so-called "strong perception, strong intelligence" route.
Compared with the "weak perception, super intelligence" route, the biggest feature of the "strong perception, strong intelligence" route is the addition of lidar as a sensor, which greatly improves perception capability. Tesla adopts the "weak perception, super intelligence" route, while Google Waymo, Baidu Apollo, Uber, Ford Motor, and other artificial intelligence companies, mobility companies, and traditional car makers adopt the "strong perception, strong intelligence" route.
The purpose of positioning is to obtain the precise position of the autonomous vehicle relative to the external environment; it is the basic prerequisite for autonomous driving. When driving on complex urban roads, the positioning error must be no more than 10 cm. For example, only by accurately knowing the distance between the vehicle and an intersection can the vehicle make accurate predictions and preparations, and only with accurate positioning can the vehicle determine which lane it is in. If the positioning error is too large, it may even cause a traffic accident.
GPS is currently the most widely used positioning method; the higher the required accuracy, the more expensive the GPS sensor. However, the positioning accuracy of commercial GPS is far from sufficient: it is only at the meter level and is easily disturbed by factors such as tunnel occlusion and signal delay. To solve this problem, Qualcomm has developed vision-enhanced high-precision positioning (VEPP) technology, which fuses information from multiple vehicle components such as GNSS navigation, cameras, the IMU (inertial measurement unit), and wheel speed sensors; through mutual calibration and data fusion among these sensors, it achieves global real-time positioning accurate to the lane line.
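To make the idea of multi-sensor fusion concrete, here is a minimal, hypothetical sketch (not Qualcomm's VEPP algorithm) of a one-dimensional Kalman filter that blends wheel-speed dead reckoning with noisy, meter-level GPS fixes. All noise levels and measurements are invented for the example.

```python
# Minimal 1-D Kalman filter: wheel-speed odometry predicts position,
# noisy GPS fixes correct it. Purely illustrative values.

def kalman_step(x, p, wheel_speed, dt, gps_pos, q=0.05, r=1.0):
    """One predict/update cycle.

    x, p        : current position estimate and its variance
    wheel_speed : measured speed (m/s) used for dead reckoning
    dt          : time step (s)
    gps_pos     : GPS position measurement (m), meter-level noise
    q, r        : assumed process and measurement noise variances
    """
    # Predict: integrate the wheel speed forward in time.
    x_pred = x + wheel_speed * dt
    p_pred = p + q

    # Update: correct the prediction with the GPS measurement.
    k = p_pred / (p_pred + r)            # Kalman gain
    x_new = x_pred + k * (gps_pos - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Example: vehicle travelling at 10 m/s, GPS readings with ~1 m noise.
x, p = 0.0, 1.0
for t, gps in enumerate([10.3, 19.6, 30.8, 39.5], start=1):
    x, p = kalman_step(x, p, wheel_speed=10.0, dt=1.0, gps_pos=gps)
    print(f"t={t}s  fused position = {x:.2f} m (variance {p:.3f})")
```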
Decision-making and planning is one of the key parts of autonomous driving. It first fuses multi-sensor information, makes task-level decisions according to the driving requirements, and then plans several safe paths between two points that avoid existing obstacles while satisfying specific constraints; the optimal one among these paths is chosen as the vehicle's driving trajectory. This is planning. Depending on the level of division, planning can be divided into global planning and local planning. Global planning uses the available map information to plan a collision-free optimal path under given conditions. For example, there are many roads from Shanghai to Beijing; planning one of them as the driving route is global planning.
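To illustrate what grid-based global planning looks like in code, here is a minimal A* search on a small occupancy grid. The grid, costs, and Manhattan heuristic are invented for the example; real systems plan over road-network maps instead.

```python
import heapq

def astar(grid, start, goal):
    """A* path search on a 2-D occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan
    open_set = [(heuristic(start, goal), 0, start, None)]
    came_from, g_cost = {}, {start: 0}

    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:          # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:               # reconstruct the path back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + heuristic((nr, nc), goal),
                                              ng, (nr, nc), node))
    return None  # no collision-free path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```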
Static path planning algorithms include the grid method, the visibility graph method, the topological method, the free space method, and neural network methods. Local planning, which builds on global planning and uses local environment information, is the process of avoiding unknown obstacles and finally reaching the target point. For example, on the globally planned route from Shanghai to Beijing there will be other vehicles or obstacles; to avoid them, the vehicle needs to steer and change lanes, and this is local path planning. Local path planning methods include dynamic path planning algorithms such as the artificial potential field method, the vector field histogram method, the virtual force field method, and genetic algorithms.
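As an illustration of local planning, here is a minimal sketch of the artificial potential field method mentioned above: the goal attracts the vehicle while obstacles repel it, and the vehicle follows the combined force direction. Gains, radii, and the scenario are invented for the example.

```python
import math

def potential_field_step(pos, goal, obstacles,
                         k_att=1.0, k_rep=2.0, rep_radius=3.0, step=0.1):
    """One step in an attractive/repulsive potential field."""
    # Attractive force pulls straight toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])

    # Each nearby obstacle adds a repulsive force pushing away from it.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < rep_radius:
            mag = k_rep * (1.0 / d - 1.0 / rep_radius) / d**2
            fx += mag * dx / d
            fy += mag * dy / d

    norm = math.hypot(fx, fy)
    if norm < 1e-6:
        return pos                      # stuck in a local minimum
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

# Drive from (0, 0) toward (10, 0) while skirting an obstacle at (5, 0.5).
pos, goal, obstacles = (0.0, 0.0), (10.0, 0.0), [(5.0, 0.5)]
for _ in range(300):
    if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < 0.2:
        break
    pos = potential_field_step(pos, goal, obstacles)
print(f"final position = ({pos[0]:.2f}, {pos[1]:.2f})")
```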
The decision-making and planning layer is a direct reflection of the intelligence of an autonomous driving system and plays a decisive role in the driving safety of the whole vehicle. Common decision-making and planning architectures fall into three types: hierarchical (sequential), reactive, and hybrids of the two.
The hierarchical architecture is a serial structure: the modules of the intelligent driving system are arranged in a clear order, and the output of one module is the input of the next, which is why it is also called the sense-plan-act structure. However, the reliability of this structure is not high. Once a software or hardware failure occurs in one module, the entire information flow is affected, and the whole system is likely to fail or even become paralyzed.
The reactive architecture adopts a parallel structure in which the control layer can make decisions directly from sensor input, so the generated actions are a direct result of the sensed data. This highlights its sense-act character and makes it suitable for completely unfamiliar environments. Most behaviors in a reactive architecture handle a single, simple task, so perception and control can be tightly coupled, little storage is needed, responses are fast, and real-time performance is strong. At the same time, each layer is responsible for only one behavior of the system, so the whole system can move conveniently and flexibly from low-level to high-level behaviors; moreover, if one module fails unexpectedly, the remaining layers can still produce meaningful actions, which greatly improves the robustness of the system. The difficulty is that, because the system acts so flexibly, a specific coordination mechanism is needed to resolve conflicts between the various control loops so that the actuators receive meaningful commands.
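As a rough illustration of the reactive idea, here is a hypothetical sketch in which independent behaviors each map sensor data directly to a command, and a simple priority-based arbiter stands in for the coordination mechanism mentioned above. All behavior names, thresholds, and commands are invented.

```python
# Each behavior maps sensor data directly to a (priority, command) pair,
# or None when the behavior is inactive. Higher priority wins.

def emergency_brake(sensors):
    if sensors["front_distance_m"] < 5.0:
        return (100, {"brake": 1.0, "steering": 0.0})
    return None

def follow_lane(sensors):
    # Steer proportionally to the lateral offset from the lane centre.
    return (10, {"brake": 0.0, "steering": -0.1 * sensors["lane_offset_m"]})

BEHAVIORS = [emergency_brake, follow_lane]

def arbitrate(sensors):
    """Pick the command of the active behavior with the highest priority."""
    proposals = [b(sensors) for b in BEHAVIORS]
    proposals = [p for p in proposals if p is not None]
    return max(proposals, key=lambda p: p[0])[1]

print(arbitrate({"front_distance_m": 3.0, "lane_offset_m": 0.2}))   # brakes
print(arbitrate({"front_distance_m": 40.0, "lane_offset_m": 0.2}))  # steers
```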
The hierarchical and reactive architectures each have their own advantages and disadvantages, and neither alone can easily meet the complex and changeable demands of real driving environments. More and more people in the industry are therefore studying hybrid architectures that effectively combine the advantages of both: at the global planning level, a hierarchical structure generates goal-oriented behavior, while at the local planning level a reactive structure generates behavior oriented toward reaching the target.
The core control technologies for autonomous driving are the vehicle's longitudinal control and lateral control. Longitudinal control covers the vehicle's driving and braking, while lateral control covers steering wheel angle adjustment and tire force control. By implementing both longitudinal and lateral automatic control, the vehicle can be operated automatically according to the given targets and constraints.
Longitudinal control is automatic control along the direction of travel, that is, control of the vehicle's speed and of the distance between the vehicle and the preceding or following vehicles or obstacles. Cruise control and emergency braking are typical examples of longitudinal control in autonomous driving. Such control problems come down to controlling the motor drive, engine, transmission, and braking systems; various motor-engine-transmission models, vehicle operation models, and braking process models are combined with different controller algorithms to form various longitudinal control modes.
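As a simple illustration of longitudinal control, here is a minimal sketch of a PID cruise controller regulating speed toward a set point. The toy plant model, gains, and limits are invented for the example and are not tuned for a real vehicle.

```python
class PIDCruiseController:
    """Minimal PID speed controller (illustrative gains only)."""

    def __init__(self, kp=0.5, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_speed, current_speed, dt):
        error = target_speed - current_speed
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Output > 0 means throttle, < 0 means braking; clamp to [-1, 1].
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, u))

# Toy simulation: the speed responds to the command with simple drag.
controller, speed, dt = PIDCruiseController(), 0.0, 0.1
for _ in range(300):
    u = controller.step(target_speed=20.0, current_speed=speed, dt=dt)
    speed += (4.0 * u - 0.05 * speed) * dt   # crude longitudinal dynamics
print(f"speed after 30 s = {speed:.1f} m/s")
```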
Lateral control refers to control perpendicular to the direction of motion. Its goal is to keep the vehicle automatically on the desired driving route while maintaining good ride comfort and stability under different speeds, loads, wind resistance, and road conditions. There are two basic design approaches for vehicle lateral control. One is based on driver simulation: either a relatively simple dynamics model and driver manipulation rules are used to design the controller, or the controller is trained on data recorded from a driver's manipulation to obtain the control algorithm. The other is based on a model of the car's lateral dynamics: an accurate lateral motion model must be established, a typical example being the single-track (bicycle) model, which assumes the characteristics of the car's left and right sides are the same.
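To make the single-track (bicycle) model concrete, here is a minimal sketch of a kinematic bicycle model combined with a pure pursuit steering law chasing a single target point. The wheelbase, speed, and target are invented for the example.

```python
import math

WHEELBASE = 2.7  # metres (assumed)

def pure_pursuit_steer(x, y, yaw, target):
    """Steering angle that points the bicycle model at a look-ahead target."""
    dx, dy = target[0] - x, target[1] - y
    alpha = math.atan2(dy, dx) - yaw        # angle to target in vehicle frame
    lookahead = math.hypot(dx, dy)
    return math.atan2(2.0 * WHEELBASE * math.sin(alpha), lookahead)

def bicycle_step(x, y, yaw, v, steer, dt):
    """One integration step of the kinematic single-track model."""
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += v / WHEELBASE * math.tan(steer) * dt
    return x, y, yaw

# Drive at 5 m/s toward a point offset to the left of the start pose.
x, y, yaw, v, target = 0.0, 0.0, 0.0, 5.0, (20.0, 5.0)
for _ in range(200):
    if math.hypot(target[0] - x, target[1] - y) < 1.0:
        break
    steer = pure_pursuit_steer(x, y, yaw, target)
    x, y, yaw = bicycle_step(x, y, yaw, v, steer, dt=0.1)
print(f"final pose = ({x:.1f}, {y:.1f}), heading {math.degrees(yaw):.0f} deg")
```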
In addition to the environment perception, precise positioning, decision-making and planning, and control and execution technologies introduced above, self-driving cars also involve key technologies such as high-precision maps, V2X, and autonomous vehicle testing. Autonomous driving combines artificial intelligence, high-performance chips, communication technology, sensor technology, vehicle control technology, big data technology, and other fields, and is difficult to implement. In addition, putting autonomous driving into practice requires building transportation infrastructure that meets its requirements and taking into account laws and regulations on autonomous driving.