An In-Depth Explanation of Testing Technology for Intelligent Driving Perception Systems
With advances in artificial intelligence and its supporting software and hardware, autonomous driving has developed rapidly in recent years. Autonomous driving systems are now used in driver assistance systems for civilian vehicles, autonomous logistics robots, drones, and other fields. The perception component is the core of an autonomous driving system: it enables the vehicle to analyze and understand information about the traffic environment inside and outside the vehicle. Like other software systems, however, autonomous driving perception systems suffer from software defects. Moreover, because autonomous driving systems operate in safety-critical scenarios, their defects can lead to catastrophic consequences; in recent years there have been many fatalities and injuries caused by defects in autonomous driving systems. Testing technology for autonomous driving systems has therefore received widespread attention from academia and industry. Enterprises and research institutions have proposed a series of technologies and environments, including virtual simulation testing, real road testing, and combined virtual-real testing. However, due to the particular input data types and diverse operating environments of autonomous driving systems, implementing such testing requires substantial resources and entails considerable risk. This article briefly analyzes the current state of research and application of testing methods for autonomous driving perception systems.
Quality assurance of the autonomous driving perception system is becoming increasingly important. The perception system must help the vehicle automatically analyze and understand road condition information; its composition is complex, and the reliability and safety of the system under test must be fully verified across many traffic scenarios. Current autonomous driving perception testing falls mainly into three categories. Whatever the method, it exhibits an important feature that distinguishes it from traditional testing: a strong dependence on test data.
The first category of testing is based mainly on software engineering theory and formal methods, taking the model structure and mechanism of the perception system implementation as its entry point. This approach relies on a high-level understanding of the operating mechanisms and system characteristics of autonomous driving perception. The purpose of this logic-oriented perception system testing is to discover design flaws in the perception module at early stages of system development, ensuring the effectiveness of the model algorithm in early system iterations. Based on the characteristics of autonomous driving algorithm models, researchers have proposed a series of techniques for test data generation, test verification metrics, and test evaluation.
The second category, virtual simulation, uses computers to abstract the actual traffic system and carry out testing tasks, including system testing in a preset virtual environment and independent testing of perception components. The effectiveness of virtual simulation testing depends on the realism of the virtual environment, the quality of the test data, and the specific test execution technology, so the validity of the environment construction method, data quality assessment, and test verification technology must all be fully considered. Autonomous driving environment perception and scene analysis models rely on large-scale, valid traffic scene data for training and test verification, and researchers at home and abroad have studied traffic scenes and their data generation extensively, using methods such as data mutation, simulation engine generation, and game model rendering to construct virtual test scene data, obtain high-quality test data, and amplify and enhance the data used for autonomous driving models. Test scenario and data generation are key technologies: test cases must be rich enough to cover the state space of the test samples, and samples must be generated under extreme traffic conditions to test the safety of the system's decision output under these boundary cases, as the sketch below illustrates. Virtual testing often combines existing testing theory and techniques to construct effective methods for evaluating and verifying test results.
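To make "covering the state space" and "boundary cases" concrete, here is a minimal sketch that enumerates a small, discretized scenario space and filters out extreme combinations. The parameter axes and their values are illustrative assumptions, not drawn from any particular test suite.

```python
import itertools
import random

# Hypothetical scenario parameters: each axis is one dimension of the
# discretized traffic-scene state space (illustrative values only).
WEATHER = ["clear", "rain", "fog", "snow"]
TIME_OF_DAY = ["noon", "dusk", "night"]
TRAFFIC_DENSITY = ["sparse", "normal", "congested"]
PEDESTRIAN_EVENT = ["none", "crossing", "sudden_run_out"]

def enumerate_scenarios():
    """Exhaustively enumerate the discretized scenario space."""
    return list(itertools.product(WEATHER, TIME_OF_DAY,
                                  TRAFFIC_DENSITY, PEDESTRIAN_EVENT))

def boundary_scenarios(scenarios):
    """Pick the extreme combinations treated as boundary use cases here:
    poor visibility combined with an abrupt pedestrian event."""
    return [s for s in scenarios
            if s[0] in ("fog", "snow") and s[1] == "night"
            and s[3] == "sudden_run_out"]

if __name__ == "__main__":
    all_scenarios = enumerate_scenarios()
    extremes = boundary_scenarios(all_scenarios)
    print(f"total scenarios: {len(all_scenarios)}, boundary cases: {len(extremes)}")
    # A simulation engine would instantiate each tuple as a concrete virtual scene;
    # random sampling covers the bulk of the space, extremes are always included.
    sample = random.sample(all_scenarios, k=5) + extremes
```

In practice each axis would be far finer-grained (continuous fog density, pedestrian trajectories, and so on), and sampling strategies rather than exhaustive enumeration keep the test budget tractable.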
The third category is road testing of real vehicles equipped with autonomous driving perception systems, including testing in preset closed scenes and on actual roads. Its advantage is that testing in a real environment fully guarantees the validity of the results. However, this approach faces difficulties: test scenarios struggle to cover diverse needs, relevant traffic scene data samples are hard to obtain, manual annotation of real road data is costly and of uneven quality, the required test mileage is excessive, and the data collection cycle is too long. Manually driving in dangerous scenarios carries safety risks, and these are problems testers cannot easily solve in the real world. At the same time, traffic scene data suffers from single data sources and insufficient diversity, which cannot satisfy the testing and verification requirements of autonomous driving researchers in software engineering. Nevertheless, road testing is an indispensable part of traditional vehicle testing and remains extremely important in autonomous driving perception testing.
From the perspective of test type, perception system testing covers different content across the vehicle development life cycle. Autonomous driving testing can be divided into model-in-the-loop (MiL) testing, software-in-the-loop (SiL) testing, hardware-in-the-loop (HiL) testing, vehicle-in-the-loop (ViL) testing, and so on. This article focuses on the SiL and HiL aspects of autonomous driving perception system testing. HiL includes perception hardware devices such as cameras, lidar, and human-computer interaction perception modules; SiL uses software simulation to replace the data generated by real hardware. Both aim to verify the functionality, performance, robustness, and reliability of the autonomous driving system. For a given test object, different test types are combined with different testing technologies at each stage of perception system development to meet the corresponding verification requirements. Current autonomous driving perception information comes mainly from the analysis of several principal data types: images (camera), point clouds (lidar), and fusion perception systems. This article analyzes perception testing for these three data types.
Images collected by multiple types of cameras are one of the most important input data types for autonomous driving perception. Image data provide front-view, surround-view, rear-view, and side-view environmental information while the vehicle is running, helping the autonomous driving system perform functions such as road ranging, target recognition and tracking, and automatic lane change analysis. Image data come in various formats, such as RGB images, semantic images, and depth images, each with its own characteristics: RGB images carry richer color information, depth images contain more scene depth information, and semantic images, obtained from pixel-level classification, are more useful for target detection and tracking tasks.
Image-based testing of autonomous driving perception systems relies on large-scale, valid traffic scene images for training and test verification. However, manual labeling of real road data is costly, the data collection cycle is long, laws and regulations covering manual driving in dangerous scenes are imperfect, and labeling quality is uneven. Traffic scene data are also limited by single data sources and insufficient diversity, which cannot satisfy the testing and verification requirements of autonomous driving research.
Researchers at home and abroad have studied the construction and generation of traffic scene data extensively, using methods such as data mutation, generative adversarial networks, simulation engine generation, and game model rendering to build virtual test scenario data, obtain high-quality test data, and use the generated data for autonomous driving models and data enhancement. Generating test images with hard-coded image transformations is an effective method: a variety of mathematical transformations and image processing techniques can mutate the original image to probe the potential erroneous behavior of the autonomous driving system under different environmental conditions.
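As a minimal sketch of such hard-coded image mutation, the functions below apply brightness, noise, and fog-style transformations to a road image. The specific operators and parameter values are illustrative assumptions, not any published tool's implementation.

```python
import numpy as np

def adjust_brightness(img: np.ndarray, delta: int) -> np.ndarray:
    """Simulate over/under-exposure by shifting pixel intensities."""
    return np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)

def add_gaussian_noise(img: np.ndarray, sigma: float) -> np.ndarray:
    """Simulate sensor noise, e.g. in low-light conditions."""
    noise = np.random.normal(0.0, sigma, img.shape)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def add_fog(img: np.ndarray, strength: float) -> np.ndarray:
    """Blend the image toward white to mimic fog (0 = none, 1 = whiteout)."""
    white = np.full_like(img, 255, dtype=np.float32)
    return (img.astype(np.float32) * (1 - strength) + white * strength).astype(np.uint8)

if __name__ == "__main__":
    # A random stand-in for a real road image (H x W x 3, RGB).
    img = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
    mutants = [adjust_brightness(img, -80),    # dusk / underexposure
               add_gaussian_noise(img, 25.0),  # night-time sensor noise
               add_fog(img, 0.6)]              # heavy fog
    # Each mutant is fed to the perception model; a metamorphic relation
    # (e.g., the predicted steering angle should not change drastically)
    # flags potentially erroneous behavior.
```

The key property of these operators is that they preserve the semantics of the scene, so the expected model output is known without re-labeling, which is what makes the mutated images usable as test oracles.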
Zhang et al. used a generative adversarial network-based method for image style transfer to simulate vehicle driving scenes under specified environmental conditions. Some studies run autonomous driving tests in virtual environments, using 3D models from physical simulation to construct traffic scenes and rendering them into 2D images as input to the perception system. Test images can also be generated by synthesis: modifiable content is sampled in a low-dimensional image subspace and then composed into images. Compared with directly mutating images, synthesized scenes are richer and image perturbation operations are freer. Fremont et al. used Scenic, a domain-specific programming language for autonomous driving, to pre-design test scenarios, generated specific traffic scene images through a game engine interface, and used the rendered images to train and verify a target detection model.
Pei et al. applied the idea of differential testing to find inconsistent outputs of autonomous driving steering models, and proposed measuring the effectiveness of test samples with neuron coverage, that is, the proportion of neurons in the neural network whose activation exceeds a preset threshold. Building on neuron coverage, researchers have proposed many new coverage criteria, such as neuron boundary coverage, strong neuron activation coverage, and layer-level neuron coverage. Heuristic search for target test cases is also an effective method; its core difficulty lies in designing test evaluation metrics to guide the search. Autonomous driving image system testing commonly suffers from a lack of labeled data for special driving scenarios. Inspired by adaptive random testing in software engineering, this team proposed ATS, an adaptive test case selection method for deep neural networks, to reduce the high human cost of labeling deep neural network test data in autonomous driving perception systems.
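A sketch of the neuron coverage metric, following the DeepXplore-style definition (the fraction of neurons whose scaled activation exceeds a threshold on at least one test input), is shown below; the layer names and activation arrays are hypothetical stand-ins for what a framework hook would record.

```python
import numpy as np

def neuron_coverage(activation_batches: dict, threshold: float = 0.5) -> float:
    """Fraction of neurons whose min-max scaled activation exceeds `threshold`
    on at least one test input. `activation_batches` maps a layer name to an
    array of shape (num_inputs, num_neurons)."""
    covered, total = 0, 0
    for acts in activation_batches.values():
        # Scale each input's activations within the layer to [0, 1].
        lo = acts.min(axis=1, keepdims=True)
        hi = acts.max(axis=1, keepdims=True)
        scaled = (acts - lo) / (hi - lo + 1e-8)
        # A neuron counts as covered if any input activates it past threshold.
        covered += int((scaled > threshold).any(axis=0).sum())
        total += acts.shape[1]
    return covered / total

if __name__ == "__main__":
    # Hypothetical activations of two layers over 100 test inputs.
    rng = np.random.default_rng(0)
    acts = {"fc1": rng.random((100, 128)), "fc2": rng.random((100, 64))}
    print(f"neuron coverage: {neuron_coverage(acts):.2%}")
```

Coverage criteria like this serve the same role as code coverage in conventional testing: a higher value suggests the test suite exercises more of the model's internal states, and it can act as the fitness signal guiding heuristic test case search.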
Lidar is a crucial sensor for the autonomous driving system. It measures the propagation distance between the sensor transmitter and the target object, and analyzes information such as the amount of energy reflected from the object's surface and the amplitude, frequency, and phase of the reflected wave spectrum. The collected point cloud data accurately depict the three-dimensional scale and reflection intensity of objects in the driving scene, compensating for the camera's limitations in data form and accuracy. Lidar plays an important role in tasks such as autonomous driving target detection and positioning and mapping, and cannot be replaced by vision alone.
As a typical complex intelligent software system, an autonomous driving system takes the surrounding environment information captured by lidar as input, makes judgments through the artificial intelligence model in the perception module, and completes various driving tasks through system planning and control. Although the high complexity of the artificial intelligence model gives the autonomous driving system its perception capability, existing traditional testing relies on manual collection and annotation of point cloud data, which is costly and inefficient. Moreover, point cloud data are unordered, lack obvious color information, are easily disturbed by weather, and suffer from signal attenuation, which makes the diversity of point cloud data particularly important during testing.
Testing of lidar-based autonomous driving systems is still in its early stages. Both real road tests and simulation tests suffer from high cost, low efficiency, and inadequate guarantees of test sufficiency. Given the challenges autonomous driving systems face, such as changeable test scenarios, large and complex software, and huge testing costs, test data generation technology based on domain knowledge is of great significance for assuring autonomous driving systems.
For lidar point cloud data generation, Sallab et al. modeled point cloud data by building a cycle-consistent generative adversarial network and performed feature analysis on the simulated data to generate new point cloud data. Yue et al. proposed a point cloud data generation framework for autonomous driving scenes that precisely mutates point cloud data in game scenes based on annotated objects to obtain new data; retraining the point cloud processing module of the autonomous driving system on the mutated data yielded clear accuracy improvements.
This team designed and implemented LiRTest, an automated lidar testing tool mainly used for automated testing of autonomous vehicle target detection systems, whose outputs can also be used for retraining to improve system robustness. In LiRTest, domain experts first design physical and geometric models, and transformation operators are then constructed on top of them. Developers select point cloud seeds from real-world data, identify and process them with the point cloud processing unit, and run operator-based mutation algorithms to generate test data for evaluating the robustness of 3D target detection models in autonomous driving. Finally, LiRTest produces a test report and feeds the results back into operator design, iteratively improving quality.
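To give a feel for operator-based point cloud mutation in this spirit, here is a minimal sketch; it is not LiRTest's actual implementation, and the three operators (rain-like noise, random dropout, per-object rotation) along with all parameter values are illustrative assumptions.

```python
import numpy as np

def add_rain_noise(points: np.ndarray, num_noise: int) -> np.ndarray:
    """Weather operator: inject spurious returns to mimic rain/snow scatter."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    noise = np.random.uniform(lo, hi, size=(num_noise, points.shape[1]))
    return np.vstack([points, noise])

def random_dropout(points: np.ndarray, drop_ratio: float) -> np.ndarray:
    """Attenuation operator: randomly discard returns to mimic signal loss."""
    keep = np.random.rand(len(points)) > drop_ratio
    return points[keep]

def rotate_object(points: np.ndarray, mask: np.ndarray, yaw: float) -> np.ndarray:
    """Geometry operator: rotate only the points of one annotated object
    around its own center, leaving the rest of the scene untouched."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    out = points.copy()
    center = points[mask].mean(axis=0)
    out[mask] = (points[mask] - center) @ rot.T + center
    return out

if __name__ == "__main__":
    cloud = np.random.uniform(-50, 50, (20000, 3))   # stand-in lidar frame
    obj_mask = np.zeros(len(cloud), dtype=bool)
    obj_mask[:500] = True                            # hypothetical annotated car
    mutants = [add_rain_noise(cloud, 300),
               random_dropout(cloud, 0.1),
               rotate_object(cloud, obj_mask, np.pi / 12)]
    # Each mutant keeps its original 3D bounding-box labels (possibly
    # transformed), so the detector's output can be checked automatically.
```

The geometry operator illustrates why annotations matter: because the mutation is applied to a labeled object, the expected bounding box after mutation is known, giving an oracle for the 3D detection model without manual re-labeling.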
The autonomous driving system is a typical cyber-physical system: its operating state is determined not only by user input and the internal state of the software, but also by the physical environment. Although a small amount of research has addressed point cloud data generation under various environmental factors, the characteristics of point cloud data make it difficult for generated data to match the authenticity of road test data. Therefore, how to automatically generate point cloud data that describes a variety of real environmental factors, without significantly increasing resource consumption, is a key issue that needs to be solved.
In common autonomous driving software architectures, artificial intelligence models have an extremely important influence on driving decisions and system behavior, affecting functions such as object recognition, path planning, and behavior prediction. The most commonly used artificial intelligence model for point cloud data processing is the target detection model, implemented with deep neural networks. Although this technology achieves high accuracy on specific tasks, the lack of interpretability of its results prevents users and developers from analyzing and confirming its behavior, which greatly complicates the development of testing technology and the evaluation of test adequacy. These are challenges that future lidar model testers will need to face.
Autonomous driving systems are usually equipped with a variety of sensors to perceive environmental information and run a variety of software and algorithms to complete autonomous driving tasks. Different sensors have different physical characteristics and application scenarios. Fusion sensing technology compensates for the poor environmental adaptability of any single sensor and, through the cooperation of multiple sensors, ensures normal operation of the autonomous driving system under various environmental conditions.
Because they record information differently, different types of sensors are strongly complementary. Cameras are cheap to install, and the images they collect offer high resolution and rich visual information such as color and texture; however, cameras are sensitive to the environment and may be unreliable at night, under strong light, and during other lighting changes. Lidar, by contrast, is not easily affected by lighting changes and provides precise three-dimensional perception day and night; however, it is expensive, and the point cloud data it collects lack color information, making targets without distinctive shapes hard to identify. How to exploit the advantages of each modality and mine deeper semantic information has become an important issue in fusion sensing technology.
Researchers have proposed a variety of data fusion methods. Deep learning-based fusion sensing of lidar and cameras has become a major research direction due to its high accuracy. Feng et al. summarized fusion methods into three types: early, mid, and late fusion. Early fusion fuses only raw or preprocessed data; mid fusion cross-fuses the features extracted by each branch; late fusion fuses only the final outputs of each branch. Although deep learning-based fusion sensing has shown great potential on existing benchmark datasets, such models may still exhibit incorrect and unexpected extreme behavior in complex real-world scenarios, leading to fatal consequences. To ensure the safety of autonomous driving systems, fusion perception models must be thoroughly tested.
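The three fusion stages can be sketched schematically as follows, with trivial stand-in feature extractors and a stand-in detection head in place of real deep networks; everything here is an illustrative assumption meant only to show where fusion happens in each variant.

```python
import numpy as np

# Toy stand-ins: each branch reduces its modality to a fixed-length feature.
def image_features(img: np.ndarray) -> np.ndarray:
    return img.reshape(-1, 3).mean(axis=0)           # (3,) image feature

def lidar_features(pts: np.ndarray) -> np.ndarray:
    return pts.mean(axis=0)                          # (3,) point-cloud feature

def detector_head(feat: np.ndarray) -> float:
    return float(feat.sum())                         # stand-in detection score

def early_fusion(img: np.ndarray, pts: np.ndarray) -> float:
    """Fuse raw (or lightly preprocessed) data, then run one joint model."""
    joint = np.concatenate([img.reshape(-1, 3), pts])
    return detector_head(joint.mean(axis=0))

def mid_fusion(img: np.ndarray, pts: np.ndarray) -> float:
    """Each branch extracts features; the features are fused before the head."""
    feat = np.concatenate([image_features(img), lidar_features(pts)])
    return detector_head(feat)

def late_fusion(img: np.ndarray, pts: np.ndarray) -> float:
    """Each branch produces its own output; only the outputs are combined."""
    return 0.5 * detector_head(image_features(img)) + \
           0.5 * detector_head(lidar_features(pts))

if __name__ == "__main__":
    img = np.random.rand(32, 32, 3)    # stand-in camera frame
    pts = np.random.rand(1000, 3)      # stand-in lidar frame
    print(early_fusion(img, pts), mid_fusion(img, pts), late_fusion(img, pts))
```

The point of the taxonomy is where information is shared: earlier fusion preserves more cross-modal detail but is more sensitive to calibration and alignment errors, while later fusion is more modular but can only combine each branch's already-reduced conclusions.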
Currently, fusion sensing testing technology is still in its early stages: the test input domain is huge and data collection is expensive, so automated test data generation has received widespread attention. Wang et al. proposed a cross-modal data augmentation algorithm that inserts virtual objects into images and point clouds according to geometric consistency rules to generate test data sets. Zhang et al. proposed a multimodal data augmentation method that uses multimodal transformation flows to maintain the correct mapping between point clouds and image pixels, and on this basis further proposed a multimodal cut-and-paste augmentation method.
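The geometric consistency these methods must preserve rests on the standard pinhole projection from lidar coordinates to image pixels: a virtual object inserted into the point cloud must also appear at the projected pixel locations. The sketch below assumes hypothetical calibration matrices K, R, and t.

```python
import numpy as np

def project_lidar_to_image(points: np.ndarray, K: np.ndarray,
                           R: np.ndarray, t: np.ndarray):
    """Project lidar points (N, 3) into pixel coordinates using hypothetical
    calibration: camera intrinsics K (3x3) and extrinsics R (3x3), t (3,)."""
    cam = points @ R.T + t            # lidar frame -> camera frame
    in_front = cam[:, 2] > 0          # keep only points in front of the camera
    cam = cam[in_front]
    pix = cam @ K.T                   # apply intrinsics
    pix = pix[:, :2] / pix[:, 2:3]    # perspective division -> (u, v)
    return pix, in_front

if __name__ == "__main__":
    # Illustrative calibration: identity extrinsics, simple pinhole intrinsics.
    K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    obj_points = np.random.uniform([-1, -1, 5], [1, 1, 8], (200, 3))
    uv, mask = project_lidar_to_image(obj_points, K, R, t)
    # When a virtual object is pasted into the point cloud, its image patch
    # must be rendered at these (u, v) locations to keep the modalities consistent.
```

Violating this mapping produces test data the fusion model would never see in reality, so consistency checks like this one are a precondition for any cross-modal insertion-based augmentation.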
Considering the impact of complex real-world environments on sensors, our team designed a data amplification technique for multimodal fusion sensing systems. Domain experts formulate a set of mutation rules with realistic semantics for each modality, and test data are automatically generated to simulate the various factors that interfere with sensors in real scenarios, helping software developers test and evaluate fusion sensing systems. The mutation operators fall into three categories: signal noise operators, signal alignment operators, and signal loss operators, which simulate different types of interference in real scenes. Noise operators model noise introduced into the collected data by environmental factors during sensing; for image data, for example, spot and blur operators simulate a camera encountering strong light or shaking. Alignment operators simulate misalignment between multimodal data, including temporal and spatial misalignment: for the former, one signal is randomly delayed to simulate transmission congestion or latency; for the latter, the calibration parameters of each sensor are slightly adjusted to simulate small changes in sensor position caused by vehicle jitter while driving. Signal loss operators simulate sensor failure: after one signal is randomly discarded, we observe whether the fusion algorithm can respond in time and continue to work normally.
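A minimal sketch of the three operator categories might look as follows; the function names, parameters, and concrete perturbations are illustrative assumptions, not the team's actual tool.

```python
import numpy as np

def spot_operator(img: np.ndarray, center: tuple, radius: int) -> np.ndarray:
    """Signal-noise operator: saturate a circular region to mimic lens flare."""
    out = img.copy()
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    out[mask] = 255
    return out

def time_misalign(frames: list, delay: int) -> list:
    """Signal-alignment operator (temporal): delay one stream by `delay`
    frames (repeating the first frame) to mimic transmission congestion."""
    return frames[:1] * delay + frames[:len(frames) - delay]

def space_misalign(extrinsic_t: np.ndarray, jitter: float) -> np.ndarray:
    """Signal-alignment operator (spatial): perturb the calibration
    translation to mimic sensor displacement from vehicle vibration."""
    return extrinsic_t + np.random.uniform(-jitter, jitter, size=3)

def drop_signal(batch: dict, modality: str) -> dict:
    """Signal-loss operator: discard one modality to mimic sensor failure."""
    out = dict(batch)
    out[modality] = None
    return out

if __name__ == "__main__":
    img = np.zeros((480, 640, 3), dtype=np.uint8)
    flared = spot_operator(img, center=(100, 200), radius=40)
    delayed = time_misalign([f"frame{i}" for i in range(10)], delay=2)
    shifted_t = space_misalign(np.zeros(3), jitter=0.02)
    degraded = drop_signal({"camera": img, "lidar": np.zeros((1000, 3))}, "lidar")
    # The fusion system under test is fed each mutated batch, and its outputs
    # are checked for timely degradation handling rather than silent failure.
```

Because every operator corresponds to a physically meaningful disturbance, a failure it triggers points directly at a concrete real-world risk rather than an artifact of the test generator.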
In short, multi-sensor fusion perception is an inevitable trend in the development of autonomous driving, and thorough testing is a necessary condition for ensuring that the system works normally in complex real environments. How to conduct adequate testing with limited resources remains a pressing issue.
Autonomous driving perception testing is becoming closely integrated with the autonomous driving software development process, and the various in-the-loop tests will gradually become a necessary component of autonomous driving quality assurance. In industrial applications, real road testing remains important, but its excessive cost, insufficient efficiency, and high safety hazards fall far short of the testing and verification needs of intelligent perception systems. The rapid development of formal methods and simulation-based virtual testing across multiple research branches provides effective ways to improve testing, and researchers are exploring model testing metrics and technologies suited to intelligent driving to support virtual simulation testing methods. This team is committed to researching the generation, evaluation, and optimization of autonomous driving perception test data, focusing on three directions, image-based, point cloud-based, and fusion perception testing, to support high-quality autonomous driving perception systems.