
A grand view of autonomous driving simulation! Let's talk about the industry of autonomous driving simulation!

PHPz (forwarded) · 2023-10-17 11:17:01 · 1280 views

Hello, dear listeners! It’s time for the Simulation Grand View Garden program again! Today I will give you a brief introduction to the autonomous driving simulation industry.

First, why does autonomous driving need simulation at all? A few years ago on the dating show If You Are the One, a guest named Huang Lan said she would only accept autonomous driving once two-thirds of the public accepted it, which reflects the general concern about its safety. To ensure safety, autonomous driving algorithms must undergo an enormous amount of road testing before they can be deployed at scale. But testing an autonomous driving system on real roads is very "expensive" in both time and money, so people want to move as much testing as possible onto computers, using simulation to expose most of the problems in the system and reduce the demand for on-site road testing. That is where our jobs come from.

1. Simulation scenario

The simulation scenario is the test case of the autonomous driving system. According to the classification used by the China Automotive Technology and Research Center, autonomous driving test scenarios fall into four major categories:

  1. [Natural driving scenarios]: derived from real-world driving; the most basic data source for constructing autonomous driving test scenarios.
  2. [Hazardous working-condition scenarios]: mainly severe weather, complex road traffic, and typical traffic accidents, for example the CIDAS database.
  3. [Standard regulatory scenarios]: basic test scenarios built from existing standards and evaluation procedures, used to verify the fundamental capabilities an autonomous vehicle should have.
  4. [Parameter reorganization scenarios]: existing scenarios that have been parameterized and then randomly generated or automatically recombined; this category is unlimited, scalable, batchable, and automatable.

The scenario-library construction process can be roughly divided into [collecting data] (actual road data, regulatory data, etc.), [processing data] (extracting features from the data and combining them into scenarios), and [applying data] (scenario-library testing and feedback).

At present, the generation of natural driving scenarios can be largely automated: a collection vehicle records data in a given format, an algorithm filters out the key fragments that may be useful, another algorithm computes the trajectories of the ego vehicle and the surrounding vehicles in those fragments, and the trajectories are then written into a scenario description file, such as one in OpenSCENARIO format. Many existing simulation tools can run such a file directly. Note that in this case, what the simulation software restores is only the "logic" of the recorded scene: the participants wear vehicle-model "costumes" from the software's 3D model library and re-enact snippets of real-world behavior. In other words, a scenario restored this way is perfectly adequate for testing planning and control algorithms, but it cannot reproduce the sensor data of the original moment, because the foreground vehicles and the background are still played by the simulator's 3D models. If you want to restore the raw sensor information as well, NeRF can now be applied.
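The last step of that pipeline, writing a logged trajectory into a scenario description file, can be sketched as follows. This is only a schematic illustration: the element names echo OpenSCENARIO's Trajectory/Polyline concepts, but the real standard requires much more enclosing structure and many mandatory attributes, so do not treat this as schema-valid output.

```python
# Schematic sketch: turn a logged vehicle trajectory into a minimal
# OpenSCENARIO-style XML fragment. Element names follow the OpenSCENARIO 1.x
# Trajectory/Polyline concepts but omit most of the required surrounding
# structure -- a real .xosc file must follow the full schema.
import xml.etree.ElementTree as ET

def trajectory_to_xml(points):
    """points: list of (t, x, y, heading) tuples from a processed road log."""
    traj = ET.Element("Trajectory", name="ego_log", closed="false")
    shape = ET.SubElement(traj, "Shape")
    poly = ET.SubElement(shape, "Polyline")
    for t, x, y, h in points:
        vertex = ET.SubElement(poly, "Vertex", time=str(t))
        pos = ET.SubElement(vertex, "Position")
        ET.SubElement(pos, "WorldPosition", x=str(x), y=str(y), h=str(h))
    return ET.tostring(traj, encoding="unicode")

# A tiny hypothetical log fragment (time, x, y, heading):
log = [(0.0, 0.0, 0.0, 0.0), (0.5, 5.1, 0.1, 0.02), (1.0, 10.3, 0.4, 0.05)]
xml_text = trajectory_to_xml(log)
print(xml_text[:60])
```

A production pipeline would of course emit the full OpenSCENARIO document (storyboard, entities, catalogs) rather than this bare fragment.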

So, what kinds of simulation scenarios are valuable? Replaying natural driving data collected by road-test vehicles is considered closest to real road conditions and highly random. But didn't we just say that road testing accumulates mileage far too slowly? This is why we process the road-test data, extract the traffic participants, and then rearrange and recombine them into random scenarios grounded in real data.

For example, Baidu's widely discussed 2019 paper introduced their AADS simulation system. A car equipped with lidar and binocular cameras scans the street, capturing all the footage the autonomous driving simulation needs; the input footage is then automatically decomposed into background, scene lighting, and foreground objects. View-synthesis techniques let the viewpoint move over the static background to generate realistic images from arbitrary perspectives, simulating a car driving through different environments. So how do you prove these recombined scenes are valid? The paper evaluates them by comparing how well perception algorithms recognize objects in the virtual scenes versus the real ones. Using the performance of the object under test to evaluate the measurement tool is an interesting inversion. Later NeRF research applied to autonomous driving, such as UniSim, used the same idea.
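To make that evaluation idea concrete, here is a toy sketch of scoring perception agreement between paired real and simulated frames. The greedy matching and the 0.5 IoU threshold are illustrative choices of mine, not what the AADS paper actually used.

```python
# Score how often a detector's boxes on a simulated frame reproduce its boxes
# on the corresponding real frame. Greedy matching; illustrative only.

def iou(a, b):
    # Boxes as (x1, y1, x2, y2) corner coordinates.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def agreement(real_boxes, sim_boxes, thresh=0.5):
    """Fraction of real-frame detections reproduced in the simulated frame."""
    matched = sum(
        1 for r in real_boxes if any(iou(r, s) >= thresh for s in sim_boxes)
    )
    return matched / max(len(real_boxes), 1)

real = [(0, 0, 10, 10), (20, 20, 30, 30)]
sim = [(1, 1, 11, 11)]
print(agreement(real, sim))  # 0.5: one of two real detections is reproduced
```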

Personally, I believe that however good natural-driving-data scenes become, they are only suitable for testing some algorithms: no matter how the method is implemented, the trajectories of surrounding objects are recordings, and there is no way for them to react to the ego vehicle's behavior. It is like the difference between a movie and a game: a movie can only be played back, while a game changes the scene through interaction.

Perhaps in the near future, random scene generation that combines traffic-flow simulation with real data will be able to batch-produce simulation scenarios that both match real traffic conditions and react to the ego vehicle's behavior.

2. Simulation development


The scenario library discussed above prepares the data for autonomous driving simulation testing; simulation development, then, is about creating or improving the tools.

Simulation development probably includes the following aspects:

  1. [Scenario Library]: covered at length above; it involves data processing, deep learning, databases, and related technologies.
  2. [Perception]: once a simulated environment exists, environmental information must be delivered to the algorithm, so sensor models are needed for cameras, lidar, millimeter-wave radar, ultrasonic radar, and so on, built either at physical-principle level or as ideal models depending on need. Good sensor modeling requires theoretical understanding of how the sensor works, the ability to model and implement physical processes in software, and a large amount of experimental data.
  3. [Vehicle Dynamics]: the control commands output by the algorithm need something to act on, so a vehicle dynamics model is required. This is almost a discipline of its own, with dedicated engineers studying dynamics models; using them in autonomous driving simulation means either interfacing with professional dynamics models or simplifying them.
  4. [Middleware]: the algorithm and the simulation platform, and simulation platforms with different functions, all need to exchange information, which requires a large amount of interface development. ROS is the common middleware in the autonomous driving research phase; AUTOSAR-based middleware is more common in the application phase.
  5. [Simulation Engine]: some companies like to build self-developed simulation platforms. The physics engine handles motion and collision (common open-source options: ODE, Bullet, DART), and the rendering engine handles 3D display (e.g. OGRE, OpenGL). Unreal and Unity, the two engines commonly used for games, cover both physics and rendering.
  6. [Simulation Acceleration]: involves parallel computing, cloud computing, and so on. Automated testing can also be counted here.
  7. [Front-end]: many simulation development postings are actually recruiting front-end developers, because simulation results may need interactive display.
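To give a feel for the "ideal model" end of the spectrum mentioned under [Perception], here is a minimal object-list sensor: a range and field-of-view gate with optional Gaussian range noise. All names and parameter values are invented for illustration; a physical-principle model (ray tracing, beam patterns, materials) is vastly more involved.

```python
# Minimal "ideal + noise" object-list sensor model. Targets are (x, y)
# points in the sensor frame; parameters are illustrative.
import math
import random

def detect(targets, max_range=100.0, fov_deg=120.0, noise_std=0.0, seed=0):
    rng = random.Random(seed)
    half_fov = math.radians(fov_deg) / 2
    detections = []
    for x, y in targets:
        r, bearing = math.hypot(x, y), math.atan2(y, x)
        if r <= max_range and abs(bearing) <= half_fov:
            # Add range noise; a real model would also handle occlusion,
            # latency, dropout probability, and the bus protocol.
            detections.append((r + rng.gauss(0.0, noise_std), bearing))
    return detections

hits = detect([(10, 0), (200, 0), (5, 20)], max_range=100, fov_deg=90)
print(hits)  # (200, 0) is beyond range; (5, 20) is outside the 45-degree half-FOV
```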


Finally, I think there is an eighth, more advanced requirement: the ability to plug any gap. For example, if the object under test is only one part of the autonomous driving stack, can you fill in the rest with open-source algorithms so that the "closed loop" actually runs?

3. Simulation Test

With the data and tools for autonomous driving simulation testing in hand, the next step is the testing itself. Here we mainly introduce several common simulation test stages.

  1. [MIL, model-in-the-loop]: to be honest, I don't fully know the difference between model-in-the-loop and software-in-the-loop (it may be tied to the rise of the MBSE methodology). Narrowly, model-in-the-loop verifies the logical function of the algorithm using tools such as MATLAB before the production code is written and compiled; bluntly, it means implementing the algorithm as a Simulink model and simulating that.
  2. [SIL, software-in-the-loop]: testing with the actually compiled code. Since the model-in-the-loop tests have presumably passed, SIL mainly checks whether problems were introduced in producing the code. Like HIL, SIL must supply the object under test with a set of virtual signals covering the operating environment and everything else unrelated to the function under test.
  3. [HIL, hardware-in-the-loop]: broadly, any test with a piece of hardware in the loop can be called HIL, so testing a single sensor also qualifies. Narrowly, we usually mean controller hardware-in-the-loop: a real-time computer runs a simulation model of the controlled plant, connects to the ECU under test through I/O interfaces, and performs comprehensive system testing on it. From HIL onward, the simulation must run with hard real-time performance.
  4. [VIL, vehicle-in-the-loop]: as I understand it, there are two common approaches. In one, a vehicle equipped with the autonomous driving system is mounted on a test bench; the wheels are removed and replaced by load-simulating drag motors, and the excitation the terrain and road surface would give the vehicle is reproduced by the bench. With a good display system added, this also serves as a driver-in-the-loop setup. In the other, the vehicle drives in an open field while the simulation system supplies its sensor inputs, so that although the field is empty, the algorithm believes it is surrounded by various scenes; the vehicle's GPS typically feeds position and attitude back to the simulation.
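The SIL idea above can be shown in miniature: a harness supplies every virtual signal the software under test expects each cycle and checks its outputs. Everything here (the signal names, the toy AEB braking rule) is invented for illustration; in practice the "software under test" would be the actual compiled binary behind an interface.

```python
# Toy SIL harness: feed virtual signals to a stand-in controller each cycle.

def controller_step(signals):
    """Stand-in for the real compiled code: brake if time-to-collision < 2 s."""
    ttc = signals["gap_m"] / max(signals["closing_speed_mps"], 1e-6)
    return {"brake": ttc < 2.0}

def run_sil_case(gap0, closing_speed, dt=0.1, steps=100):
    """Close the loop: advance the simulated plant until the controller brakes."""
    gap, log = gap0, []
    for _ in range(steps):
        out = controller_step({"gap_m": gap, "closing_speed_mps": closing_speed})
        log.append((gap, out["brake"]))
        if out["brake"]:
            break
        gap -= closing_speed * dt
    return log

log = run_sil_case(gap0=50.0, closing_speed=10.0)
print(f"braked at gap = {log[-1][0]:.1f} m")
```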

4. Daily work

Everything in the previous sections was a general introduction to our industry, pieced together by me like a blind man feeling the elephant. This section covers what we actually do day to day; these daily tasks naturally fall under sections 2 and 3:

  1. [Perception]: building sensor models is essential. You need to handle each sensor's parameters, such as detection range, detection angle, resolution, distortion parameters, noise parameters, and mounting position, as well as the hardware communication protocols. Then, depending on the simulation tool in use, you either "configure" an existing sensor type or develop a new one on top of the software. To train or evaluate algorithm models, the simulation often needs to provide ground truth, such as 2D/3D bounding boxes, lane lines and other map information, and 2D/3D occupancy grids; if the software's built-in features cannot deliver this, engineers must do secondary development.
  2. [Vehicle Dynamics]: configuring the vehicle model in professional dynamics software according to vehicle parameters, and also being able to write simplified kinematics and dynamics models directly from the simplified formulas.
  3. [Middleware]: interface development is a major part of the job. One side is the "translation" between the object under test and the simulation software; the other is using the software's APIs to connect simulation platforms at different levels into joint simulation, for example scenario simulation coupled with vehicle dynamics simulation, plus traffic-flow simulation, all unified under the scheduling of automated test-management software.
  4. [Simulation Acceleration]: I count automated testing as acceleration too, because running tests 24/7 without interruption is also a way to improve efficiency! This involves automated invocation of the simulation platform, writing automation scripts, recording data, and evaluating the data against each test case's requirements.
  5. [Software Development]: companies building self-developed simulation software mainly do this.

One more, point 6, [Requirements Analysis]: as a simulation development engineer, you should be the person who knows your tools best, so whenever customers (internal or external) bring new needs, you should be able to design the technical solution and propose the software, hardware, and project plan based on those needs and the specifics of the object under test. Sometimes, then, you are also doing product and project-management work.
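The "simplified kinematics model written directly from the formulas" mentioned under [Vehicle Dynamics] is often just the kinematic bicycle model. A minimal sketch (state: position and heading; inputs: speed and front steering angle; the 2.7 m wheelbase is an arbitrary illustrative value):

```python
# Kinematic bicycle model: the simplest usable stand-in when a full
# dynamics package is unavailable. Valid only at low lateral accelerations.
import math

def bicycle_step(state, v, delta, dt=0.01, wheelbase=2.7):
    """Advance (x, y, theta) by one step of speed v and steering angle delta."""
    x, y, theta = state
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += v / wheelbase * math.tan(delta) * dt
    return (x, y, theta)

# Drive straight for 1 s at 10 m/s: the vehicle should end up ~10 m ahead.
s = (0.0, 0.0, 0.0)
for _ in range(100):
    s = bicycle_step(s, v=10.0, delta=0.0)
print(s)
```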

5. Technology Stack


The term "technology stack" sounds fancy, but it really just means what this position should master. Long ago I watched a TV series in which an emergency-room doctor joked about being the hospital's jack-of-all-trades while the surgeons were the real specialists. I have always felt simulation engineers are like emergency doctors: they have to know a bit of everything. Whatever algorithm is being tested, everything around that algorithm must be prepared, including navigation and positioning, planning and control, data processing, parameter calibration, and so on. Astronomy, geography, medicine, divination, astrology... you need not master any of it in depth; quickly meeting the needs of algorithm testing is what matters most.


This "overall view" is the simulation engineer's advantage, but only with a real understanding of the algorithms can we do simulation work that genuinely helps improve them, and only then can we go further. But I digress; pulling back:

  1. [Code]: mainly C++/Python (I don't know the front-end display side). In general the requirements are not as high as for algorithm development, but if you specialize in simulation software development it is another matter.
  2. [ROS]: singled out because ROS remains unavoidable in autonomous driving and robotics algorithm research, and the ROS community provides many ready-made tools.
  3. [Vehicle Dynamics]: you may not need to know as much as a vehicle engineer, but you need the basic principles. You also need to be fluent in the various coordinate transformations (which is arguably math rather than vehicles).
  4. [Sensor Principles]: how the cameras, lidar, millimeter-wave radar, and other sensors on an autonomous vehicle work, what their output signals look like, and what their key parameters are.
  5. [Maps]: you need to understand the file formats used in simulation test scenarios, such as OpenDRIVE and OpenSCENARIO, because information sometimes has to be extracted from them as input for sensor simulation.
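As a taste of the [Maps] point, the standard library is enough to pull basic road information out of an OpenDRIVE file. The snippet below works on a drastically simplified inline example; real `.xodr` files carry far more structure (planView geometry records, lane sections, elevation profiles), so treat this as a starting sketch only.

```python
# Extract per-road lengths from a (heavily simplified) OpenDRIVE string.
import xml.etree.ElementTree as ET

XODR = """<OpenDRIVE>
  <road name="r1" length="120.5" id="1"/>
  <road name="r2" length="80.0" id="2"/>
</OpenDRIVE>"""

def road_lengths(xodr_text):
    """Map road id -> length in meters."""
    root = ET.fromstring(xodr_text)
    return {r.get("id"): float(r.get("length")) for r in root.iter("road")}

print(road_lengths(XODR))  # {'1': 120.5, '2': 80.0}
```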

The above is just my personal summary; colleagues are welcome to add to it!


For completeness, this section briefly introduces some commonly used simulation software on the market (really not an advertisement! And don't be discouraged if the software you use isn't on the list).

  1. CarSim/CarMaker: Both of these software are powerful dynamics simulation software and are widely used by OEMs and suppliers around the world. They can also simulate some road scenes.
  2. Vissim/SUMO: Vissim is a world-leading microscopic traffic flow simulation software provided by the German PTV company. Vissim can easily construct various complex traffic environments, and can also simulate the interactive behaviors of motor vehicles, trucks, rail transit and pedestrians in a simulation scene. SUMO is an open source software that can add roads, edit lane connection relationships, process intersection areas, edit signal light timing, etc. through interactive editing.
  3. PreScan: Acquired by Siemens, the main interfaces for creating and testing algorithms include MATLAB and Simulink, which can be used for MIL, SIL and HIL.
  4. VTD: as commercial software, VTD is reliable and comprehensive, covering road environment modeling, traffic scenario modeling, weather and environment simulation, both simple and physically realistic sensor simulation, scenario simulation management, and high-fidelity real-time rendering. It is no exaggeration to say VTD is the simulation software most commonly used by domestic OEMs. It supports the full development cycle from SIL to HIL and VIL, and its open modular framework co-simulates easily with third-party tools and plug-ins.
  5. CARLA/AirSim: two open-source simulation platforms, both built on UE, with Unity versions also released. CARLA can produce scenes with matching high-precision maps and supports flexible configuration of sensors and environment: multiple cameras, lidar, GPS and other sensors, plus adjustable lighting and weather. Microsoft's AirSim has drone and vehicle modes; the vehicle mode is frankly lackluster, building environment and vehicle models is not easy, and the community is less active than CARLA's. I would suggest that recruiters stop putting AirSim in job descriptions; it is not of much use. In addition, the domestic company Sangxin Technology recently launched OASIS, developed on top of CARLA, which can currently be regarded as an enhanced version of open-source CARLA.
  6. 51SimOne/PanoSim: These two are domestic simulation software, and they can meet the main functions of scene simulation software.


Finally, one more: lgsvl. Its original advantage was better integration with Apollo, but I hear the project has been officially abandoned, so I advise you to steer clear of this trap.

6. Learning Path

I believe that from the first five sections, smart students can already see the learning path toward becoming an autonomous driving simulation engineer, and by critiquing those sections, young colleagues can already see the way forward. Still, in this section I will write down some of my own shallow understanding.

As should be clear by now, autonomous driving simulation is a multidisciplinary field open to students from many majors, including but not limited to computer science, control, robotics, machinery, vehicles, and power electronics.

In terms of experience and technology, I will try to list some job requirements:

  1. Coding ability: students doing simulation cloud computing and cloud-server development may need proficiency in C++/Go/Java, good programming habits, command of common design patterns, data structures and algorithms, familiarity with Linux, Docker, and Kubernetes, and cloud-service development experience. These requirements target self-developed simulation test platforms with high parallelism, high reuse, and high automation. Positions building self-developed simulation software may additionally require game-engine development experience on top of solid computer-science fundamentals, so game developers (including technical artists) can also move into autonomous driving simulation. Students aiming at secondary development and integration of existing simulation software may need proficiency in C/C++ and Python and familiarity with Linux/ROS development; experience with automotive-grade middleware such as AUTOSAR is a further plus.
  2. Software experience: any hands-on experience with autonomous driving simulation software is of course a plus. But since most commercial software is very expensive, this depends heavily on the resources of your school lab or company. Without commercial software, I think CARLA is currently the best open-source option.
  3. Domain knowledge: personally, I believe an autonomous driving simulation engineer cannot avoid understanding autonomous driving algorithms in depth, down to how they are implemented; the better we understand the algorithms, the better we can do simulation. Also, if you are not a computer-science major, learning your own field's courses well, machinery, vehicles, mechanics, electronics, and so on, matters too; solid fundamentals will always find their use.

The autonomous driving industry is currently in great flux, but broadly, the companies that employ simulation engineers are of these types: OEMs, which mainly integrate off-the-shelf simulation software, though the new EV makers mostly have to do self-development; autonomous driving solution providers, the algorithm Tier 1s, which also mostly self-develop their simulation; and simulation software companies, which in China have only just started out and are basically startups.

To end this section, a word about my own "switch" from traditional machinery. The school where I did my master's has a strong culture of switching into computing: among the Chinese students who entered the mechanical engineering graduate program in my year, roughly seven or eight out of ten went into the computer industry after graduation. Thanks to a relatively loose course-selection system, students were encouraged to take as many courses as possible from the School of Computer Science. So for those two years, burning the midnight oil was the norm. In a word, how does a mechanical engineer become a computer engineer? Get half a degree in computer science. In fact, at the time it was not just mechanical students but every major making the switch, and not just Chinese students but people from all over the world.

However, I only realized in hindsight that my situation was different, so I missed the best window for the switch. Self-study afterwards is much harder: the biggest problem is lack of time, which calls for efficient materials and methods. Online courses are relatively efficient because a teacher guides you; Coursera's courses are good but seem fairly expensive. Open online resources have multiplied in recent years, though the courses I took were not especially polished. After all, computing is the most hands-on subject and the easiest to practice. There are also many classic books (data structures and algorithms, C++ Primer...), none of which I have actually read. Some things, once missed, are truly lost.

In fact, I think one of the easiest ways to make the switch is to move directly into computer-related work: the demands of the job solve the direction and time problems above in the fastest way. But if that leads to under-performing at the job, pretend I never said it.

7. About NeRF


NeRF, together with other emerging buzzwords like "data closed loop", "large models", and "end-to-end", is making waves in the autonomous driving field. In just a few years, NeRF has moved far beyond the simple MLP-plus-volume-rendering of its debut. Spatial information now lives in all kinds of carriers: hash tables, voxel grids, sets of 3D Gaussians... and new rendering approaches keep emerging: U-Net, CNN, rasterization... Autonomous driving is only one small application branch of NeRF.

When NeRF is applied to autonomous driving simulation, it mainly faces the following problems:

  1. The way autonomous driving data is collected leaves the scene "unbounded": outdoor scenes contain large amounts of distant background, a serious challenge for NeRF's spatial-information storage.
  2. Autonomous driving scenes contain many dynamic objects, so NeRF must handle the separation of dynamic and static objects (or foreground and background).
  3. NeRF models generally do not transfer: each scene may need its own model, and training remains relatively slow, so applying NeRF to autonomous driving data at scale is still problematic.
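For readers new to NeRF, the volume-rendering step that all these variants share fits in a few lines. The sketch below is the standard compositing quadrature used in the original NeRF paper, written in plain Python rather than a training framework, for a single gray-scale ray:

```python
# NeRF-style volume rendering along one ray: given densities sigma_i and
# colors c_i at samples with spacing delta_i, composite the pixel value.
import math

def composite(sigmas, colors, deltas):
    color, transmittance = 0.0, 1.0
    for sigma, c, delta in zip(sigmas, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this segment
        color += transmittance * alpha * c      # weight w_i = T_i * alpha_i
        transmittance *= 1.0 - alpha            # T_{i+1} = T_i * exp(-sigma_i * delta_i)
    return color

# A transparent sample followed by a nearly opaque one: the opaque
# sample's color (0.5) dominates the result.
print(composite([0.0, 50.0], [1.0, 0.5], [0.5, 0.5]))
```

In a real NeRF, the sigmas and colors come from querying the learned field at sample points along the camera ray, and the same weights are used to render depth and train against ground-truth pixels.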

Still, I look forward to, and believe in, NeRF bringing disruptive progress to autonomous driving simulation, eventually eliminating the domain gap for perception algorithms in simulation, and perhaps doing even more. From what I have learned, NeRF should at least deliver the following breakthroughs:

NeRF's novel-view synthesis can augment perception training datasets: it can generate data under new sensor intrinsics (equivalent to changing the sensor configuration) and extrinsics (modified ego trajectories), producing images, lidar point clouds, and other data to enrich perception training; see research such as StreetSurf and UniSim. Once dynamic objects become editable, NeRF will be able to generate targeted corner cases and randomized scenarios to patch the weaknesses of plain road tests and WorldSim. If NeRF can simultaneously solve reconstruction training and real-time rendering at city scale, it can serve outright as the platform for XIL in-the-loop simulation testing with no sensor-data domain gap, which will also push forward end-to-end algorithms. Moreover, NeRF models can even be dropped into game engines as plug-ins (a UE plug-in for 3D Gaussian Splatting has already been released), folding NeRF street reconstructions back into existing WorldSim systems. Combined with large models from the AIGC direction, NeRF will gain even more possibilities for generating new scenes: lighting, weather, object appearance, and behavior will all become freely editable.

So as a simulation engineer, I strongly recommend that colleagues follow NeRF's progress closely. The various NeRF research directions are still early, but hardware keeps accelerating the pace of deep learning.

8. Written at the end

I have written a lot in a scattered way, but a few final thoughts remain.

What are the pitfalls of simulation development? Setting technical pitfalls aside, my overall thought is this: be wary of sinking into meaningless work. Doing similar projects for different clients does not count as meaningless, because completing each project has value; declining ready-made tools in favor of long-term self-development does not count, because breaking dependence on specific tools has value; R&D attempts later proven unreasonable do not count, because failed R&D has value too. So what exactly is "meaningless" work? That is a matter of opinion, and I cannot sum it up well.

What else can you do from this position? If your work gives you a deep understanding of the object under test, you might try switching to an algorithm development role in some direction; simulation development for robots and drones is also worth considering.

The overlap between mobile robots and autonomous driving needs no elaboration, so let me mention drones. The drone industry is certainly smaller than the automobile industry, but it already has real applications: inspection, aerial photography, surveying and mapping, and so on. Drones likewise need automatic control algorithms for obstacle avoidance and path planning, and their sensors resemble those of autonomous vehicles, so the simulation testing has much in common: drones also need rich perception inputs such as images and radar point clouds, and they demand even more sophisticated dynamics models.

Students interested in robot and drone simulation can start with the open-source simulation platform Gazebo (https://classic.gazebosim.org/), which does not demand the kind of computing resources that Nvidia's lofty Isaac does.

This year marks the eleventh since OSRF became independent from Willow Garage, and the robot operating system ROS and Gazebo have development histories of more than twenty years. Gazebo has grown from the research tool of one graduate group into an independent simulation tool, with 11 releases to date plus 7 releases of its second generation, Ignition.


Gazebo supports ODE, Bullet, and other physics engines, and uses OGRE as its rendering engine. It can build three-dimensional environments, simulate information from sensors such as cameras and lidar, and ships with a rich set of robot models, from robotic arms to wheeled robots to humanoids. More importantly, Gazebo offers natural, comprehensive support for algorithms on ROS: after all, install the desktop-full version of ROS and Gazebo comes with it. Of course, as open-source software, Gazebo only provides a starting point; its features are well-rounded but rough, not deep in any one area. Then again, like Taizu Changquan, even a basic form is formidable when Qiao Feng wields it at Juxian Manor.

I first met Gazebo in school; later I worked in robot simulation and kept using it until I switched to autonomous driving. It is as if Gazebo and I were classmates, young and ignorant back then; after I started working we met again and got back together, inseparable for more than two years. Now, past thirty, I have told her I am leaving for a better opportunity... At this farewell I will say only one thing: long time no see...


Original link: https://mp.weixin.qq.com/s/_bOe_g3mqoobJUbFS3SNWg


Statement:
This article is reproduced from 51cto.com. In case of infringement, please contact admin@php.cn for deletion.