A Grand View of Autonomous Driving Simulation! Let's Talk About the Autonomous Driving Simulation Industry!
Hello, dear listeners! It’s time for the Simulation Grand View Garden program again! Today I will give you a brief introduction to the autonomous driving simulation industry.
First, let's talk about why autonomous driving requires simulation. A few years ago, on the dating show If You Are the One, guest Huang Lan said she would only accept autonomous driving once two-thirds of the population had accepted it, which reflects the general public's concern about its safety. To ensure safety, autonomous driving algorithms must undergo an enormous amount of road testing before they can truly be deployed at scale. But testing an autonomous driving system is very "expensive": the time and capital costs are huge. So people hope to move as many tests as possible onto computer systems, using simulation to expose most of the problems in the autonomous driving system and reduce the demand for on-site road testing. And that is how our jobs came to be.
The simulation scenario is the test case of the autonomous driving system. According to the classification of the China Automotive Technology and Research Center, autonomous driving test scenarios can be divided into four major categories: [natural driving scenarios], [hazardous working-condition scenarios], [standard regulatory scenarios], and [parameter reorganization scenarios]. Natural driving scenarios are derived from cars' real-world natural driving states and are the most basic data source for constructing autonomous driving test scenarios. Hazardous working-condition scenarios mainly cover severe weather, complex road traffic, and typical traffic accidents, such as those in the CIDAS database. Standard regulatory scenarios are the basic test scenarios used to verify that autonomous driving is effective at all: they are built from existing standards and evaluation procedures, and their purpose is to test the basic capabilities an autonomous vehicle should have. Parameter reorganization scenarios parameterize existing simulation scenarios and then randomly generate or automatically recombine them; the approach is unlimited, scalable, batchable, and automatable, as the sketch below illustrates.
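To make the parameter-reorganization idea concrete, here is a minimal Python sketch; it is my own illustration rather than any standard tool, and every parameter name and range in it is invented:

```python
# A minimal sketch of parameter reorganization: derive concrete test cases
# from a parameterized "cut-in" logical scenario, either exhaustively or by
# random sampling. All parameter names and ranges are invented examples.
import itertools
import random

PARAMETER_SPACE = {
    "ego_speed_kph":   [30, 60, 90, 120],
    "cutin_gap_m":     [5, 10, 20, 40],
    "cutin_speed_kph": [20, 50, 80],
    "road_friction":   [0.3, 0.6, 0.9],  # wet / damp / dry
}

def full_combination(space):
    """Exhaustive recombination: every combination of parameter values."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def random_sample(space, n, seed=42):
    """Random generation: n scenarios drawn uniformly from the space."""
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in space.items()} for _ in range(n)]

print(sum(1 for _ in full_combination(PARAMETER_SPACE)))  # 4*4*3*3 = 144 cases
for case in random_sample(PARAMETER_SPACE, 3):
    print(case)
```

Even this toy space yields 144 concrete cases from four parameters, which is exactly the "batch and automate" appeal of the approach.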
The scene library construction process can be roughly divided into [collecting data], i.e. actual road data, regulatory data, etc.; [processing data], i.e. extracting features from the data and combining them into scenarios; and [applying data], i.e. scene library testing and feedback.
At present, the generation of natural driving scenes can be basically automated: the collection vehicle gathers data in a fixed format, an algorithm filters out the key fragments that may be useful, another algorithm computes the trajectories of the ego vehicle and the surrounding vehicles within those fragments, and the trajectories are then written into a scene description file, such as one in OpenSCENARIO format. Many existing simulation tools can run such a scene file directly. Note, however, that what is restored in the simulation software is only the "logic" of the collected scene: the participants wear vehicle-model "vests" from the simulator's 3D model library and re-enact behavior fragments from real life. In other words, a scene restored this way is certainly sufficient for testing the control algorithm, but it cannot restore the sensor data of the original moment, because the foreground vehicles and the background are, after all, played by the simulator's 3D models. If you want to restore sensor data as well, you can turn to NeRF.
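To show what writing such a scene description file might look like, here is a toy Python sketch that serializes one extracted trajectory as an OpenSCENARIO-style Trajectory element. Real OpenSCENARIO files require far more mandatory structure (ParameterDeclarations, Entities, a Storyboard, ...), so treat this only as the core serialization idea:

```python
# A toy sketch: turn a trajectory extracted from drive logs into an
# OpenSCENARIO-style <Trajectory> element. A real .xosc file needs much
# more surrounding structure; this shows only the polyline serialization.
import xml.etree.ElementTree as ET

# (t, x, y, heading) samples for one surrounding vehicle, e.g. produced by
# the key-fragment extraction step described above (values invented).
trajectory = [(0.0, 0.0, 0.0, 0.00),
              (1.0, 8.3, 0.1, 0.02),
              (2.0, 16.5, 0.4, 0.05)]

traj = ET.Element("Trajectory", name="npc_vehicle_1", closed="false")
polyline = ET.SubElement(ET.SubElement(traj, "Shape"), "Polyline")
for t, x, y, h in trajectory:
    vertex = ET.SubElement(polyline, "Vertex", time=str(t))
    position = ET.SubElement(vertex, "Position")
    ET.SubElement(position, "WorldPosition", x=str(x), y=str(y), h=str(h))

print(ET.tostring(traj, encoding="unicode"))
```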
So, what kind of simulation scenarios are valuable? Restorations of natural driving data collected by road-test vehicles are considered the closest to real road conditions and highly random. But didn't we just say that road testing takes far too long to keep up? This is why we need to process the road-test data, extract the identified traffic participants, and then rearrange and recombine them into randomized scenarios grounded in real data.
For example, a popular 2019 paper from Baidu introduced their AADS simulation system. In this system, a car equipped with lidar and binocular cameras scans the street, capturing all the footage needed for autonomous driving simulation; the system then automatically decomposes the input footage into background, scene lighting, and foreground objects. Through view synthesis, the viewpoint can be moved across the static background to generate realistic images from any perspective, thereby simulating a car driving through different environments. So how do you prove these recombined scenarios are effective? The paper evaluates them by comparing the recognition performance of perception algorithms in the virtual scenes against the real ones. It is an interesting inversion: using the performance of the object under test to evaluate the measurement tool. Some later NeRF research applied to autonomous driving, such as UniSim, used the same idea.
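That evaluation idea is simple enough to sketch: run the same perception model over real frames and their simulated reconstructions and compare the scores. Here `detector` and `average_precision` are hypothetical placeholders for whatever model and metric you actually use:

```python
# A sketch of evaluating scenario fidelity with the object under test:
# a reconstruction is "good" if the perception model scores about as well
# on it as on the real footage. `detector` and `average_precision` are
# hypothetical placeholders, injected by the caller.

def scenario_fidelity(real_frames, sim_frames, labels, detector, average_precision):
    """Ratio near 1.0: the simulated scene exercises the perception
    model about as well as the real footage does."""
    ap_real = average_precision(detector(real_frames), labels)
    ap_sim = average_precision(detector(sim_frames), labels)
    return ap_sim / ap_real if ap_real > 0 else 0.0
```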
I personally believe that no matter how faithful a natural-driving-data simulation scene is, it is only suitable for testing some algorithms: however this method is applied, the trajectories of surrounding objects are pre-recorded, and there is no way for them to change in response to the ego vehicle's behavior. It is like the difference between a movie and a game: a movie's scenes can only be played back, while a game changes the scene through interaction.
Perhaps in the near future, random scene generation that combines traffic flow simulation with real data will be able to batch-produce simulation scenarios that both match real traffic conditions and react to the ego vehicle's behavior; the sketch below shows the reactive half of that idea.
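To give a taste of what "reacting to the ego vehicle" means, here is a minimal sketch of the classic Intelligent Driver Model (IDM), a standard car-following model from traffic flow simulation; unlike a replayed trajectory, the NPC's acceleration depends on the vehicle ahead of it. Parameter values are typical textbook choices, not tuned:

```python
# A minimal Intelligent Driver Model (IDM) sketch: a reactive NPC whose
# acceleration depends on the gap to, and speed of, the car ahead of it,
# unlike a pre-recorded trajectory. Parameters are textbook defaults.
import math

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, a=1.5, b=2.0, s0=2.0):
    """v: own speed (m/s); gap: distance to leader (m); dv: v minus leader speed."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

# The NPC follower reacts when the ego car in front brakes hard:
v_npc, x_npc, v_ego, x_ego, dt = 25.0, 0.0, 25.0, 40.0, 0.1
for _ in range(50):  # simulate 5 seconds
    v_ego = max(0.0, v_ego - 3.0 * dt)  # ego brakes at 3 m/s^2
    acc = idm_acceleration(v_npc, x_ego - x_npc, v_npc - v_ego)
    v_npc = max(0.0, v_npc + acc * dt)
    x_npc += v_npc * dt
    x_ego += v_ego * dt
print(f"gap after 5 s: {x_ego - x_npc:.1f} m, npc speed: {v_npc:.1f} m/s")
```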
The scene library we discussed above can be said to prepare the data for autonomous driving simulation testing; simulation development work, then, is about creating or improving the tools.
Simulation development probably includes the following aspects:
Finally, I think there may be an eighth point with more advanced requirements: the ability to fill in whatever is missing. For example, what if the object under test is only part of the autonomous driving functional framework? Can you use open-source algorithms to fill in the rest and get the "closed loop" running?
With the data and tools required for autonomous driving simulation testing in hand, the next step is the simulation tests themselves. Today we mainly introduce several common simulation test stages.
Everything in the previous sections was a general introduction to our industry, pieced together by me like a blind man feeling the elephant. This section will cover what we actually do day to day. These daily tasks, of course, fall under the second and third sections:
One more, point 6, [Requirements Analysis]: as a simulation development engineer, you should be the person who knows your tools best. So whenever customers (internal or external) raise new needs, the simulation development engineer should be able to design a technical solution and propose software/hardware requirements and a project plan based on those needs and the specifics of the object under test. Sometimes, in other words, you also have to do product and project-management work.
The phrase "technology stack" sounds fancy, but this position really does require you to master quite a few things. In a TV series I watched long ago, an emergency-room doctor jokes at his own expense: we are the cure-all balm, while the specialist surgeons each have their one deep skill. I have always thought simulation engineers are like those emergency doctors: they need to know a bit of everything. Whatever algorithm is under test, everything around it must be prepared, including navigation and positioning, planning and control, data processing, parameter calibration, and so on. Astronomy and geography, medicine, divination, astrology, every trade under the sun... you don't need deep expertise in each of them; quickly meeting the needs of algorithm testing is what matters most.
This so-called "overall view" is the simulation engineer's advantage, but only with a true understanding of the algorithm can we do simulation work that genuinely helps improve it, and only then can we go further. But I digress; let me pull things back:
The above is just my personal summary; colleagues are welcome to add to it!
For completeness, in this section I will also briefly introduce some simulation software commonly seen on the market (really not an advertisement! If yours is not on the list, don't be discouraged).
Finally, one more: LGSVL. Its original advantage was better integration with Apollo, but I hear the official team has abandoned the project, so I would advise you not to fall into this pit.
I believe that through my introduction in the first five sections, smart students can already see the learning path toward becoming an autonomous driving simulation engineer, and by critiquing the content of those five sections, young colleagues can already chart the way forward. Still, in this section I will write down some of my own shallow understanding.
From everything said so far, I think everyone can see that autonomous driving simulation is a multi-disciplinary field open to students from many majors, including but not limited to: computer science / control / robotics / mechanical engineering / vehicle engineering / power electronics, etc.
In terms of experience and technology, I will try to list some job requirements:
The autonomous driving industry is currently going through great turbulence, but in summary, the companies that need simulation engineers are mainly of the following types: OEMs, which mostly integrate off-the-shelf simulation software, though the new EV makers basically do in-house development; autonomous driving solution providers, i.e. the algorithm Tier 1s, which also mostly build their simulation in-house; and simulation software companies, which in China have only just gotten started in this area and are basically startups.
At the end of this section, I will talk about my own experience "switching" from traditional mechanical engineering. The school where I earned my master's degree had a strong culture of switching into computing: among the Chinese students who entered the mechanical graduate program in my year, roughly seven or eight out of ten went into the computer industry after graduation. The relatively loose course-selection system encouraged students to take as many courses as possible from the School of Computer Science. So for those two years, burning the midnight oil and living frugally was the norm; I no longer remember whether finding a job required passing any exams. In a word, how does a mechanical student transform into a computer person? Get half a degree in computer science. In fact, at the time it was not just mechanical students but every major that was switching, and not just Chinese students but people from all over the world.
However, I only realized in hindsight that my situation was different, and so I missed the best window for the transition. Self-study later on is much harder: the biggest problem is lack of time, which puts a premium on efficient learning materials and methods. Relatively speaking, online classes are more efficient because a teacher guides you. Coursera's courses are good, though they seem fairly expensive. In recent years open online resources have multiplied, but the courses are uneven in quality. After all, computing is the most hands-on subject and the easiest to practice. There are also many classic computer books, on data structures and algorithms, C Primer... I have read none of them. Some things really are lost once the moment passes.
In fact, I think one of the easiest ways to make the switch is to go straight into computer-related work: demand-driven learning improves you the fastest, and it solves the direction and time problems I mentioned above. But if this leads to sub-standard job performance, just pretend I never said anything.
NeRF, together with "data closed loop", "large models", and "end-to-end", is one of the emerging buzzwords "making waves" in the field of autonomous driving. In just a few years, NeRF has moved well beyond the simple MLP-plus-volume-rendering of its debut. There are now all kinds of carriers for storing spatial information: hash tables, voxel grids, multi-dimensional Gaussians... and new rendering methods keep emerging: U-Nets, CNNs, rasterization... Autonomous driving is only one small application branch of NeRF.
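As a reminder of that debut form, here is a minimal sketch of NeRF-style volume rendering along a single camera ray. The `query_field` function is a hypothetical stand-in for a trained MLP, returning toy density and color so the sketch runs on its own:

```python
# A minimal sketch of NeRF-style volume rendering along one ray: sample
# points, query density/color, then alpha-composite front to back.
import numpy as np

def query_field(points):
    """Hypothetical stand-in for a trained NeRF MLP: (density, rgb) per point."""
    density = np.exp(-np.linalg.norm(points, axis=-1))   # toy density field
    rgb = np.clip(points * 0.5 + 0.5, 0.0, 1.0)          # toy color field
    return density, rgb

def render_ray(origin, direction, near=0.1, far=10.0, n_samples=64):
    t = np.linspace(near, far, n_samples)                # depths along the ray
    points = origin + t[:, None] * direction             # (n_samples, 3)
    density, rgb = query_field(points)

    delta = np.diff(t, append=far)                       # sample spacing
    alpha = 1.0 - np.exp(-density * delta)               # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)          # accumulated pixel RGB

print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```

Loosely speaking, the newer carriers and renderers listed above replace either `query_field` or this compositing loop.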
When NeRF is applied to autonomous driving simulation, it mainly faces the following problems:
The way autonomous driving data is collected leaves the scene "unbounded": outdoor scenes contain large amounts of distant background, which is a serious challenge for NeRF's spatial information storage. Autonomous driving scenes contain many dynamic objects, so NeRF must be able to separate dynamic from static objects (or foreground from background). And NeRF models generally do not transfer: each scene may need its own separately trained model, and NeRF training is still relatively slow, so applying NeRF to autonomous driving data at scale remains a problem.
However, I still look forward to, and indeed believe, that NeRF will bring disruptive progress to autonomous driving simulation, eventually eliminating the domain gap that simulation poses for perception algorithms, and perhaps doing even more. From what I have learned so far, NeRF will bring at least the following breakthroughs:
NeRF's novel-view synthesis capability can augment perception-algorithm training datasets: it can generate data under new sensor intrinsics (equivalent to changing the sensor configuration) and new extrinsics (modified ego-vehicle trajectories), including images, lidar point clouds, and more, providing additional training data for perception algorithms; for this, see research such as StreetSurf and UniSim. Once dynamic objects become editable, NeRF will be able to generate targeted corner-case and randomized scenarios, making up for the shortcomings of plain road testing and WorldSim. If NeRF can simultaneously solve reconstruction training and real-time rendering for city-scale scenes, it can serve outright as the platform for XIL (X-in-the-loop) simulation testing with no perception-data domain gap, which would also drive the development of end-to-end algorithms. In addition, NeRF models can even be dropped into game engines as plug-ins (for example, a UE plug-in for 3D Gaussian Splatting has been released), letting NeRF street reconstructions merge into existing WorldSim systems. And combined with large models from the AIGC world, NeRF will have even more possibilities for generating new scenes: lighting, weather, object appearance and behavior, and more will become freely editable.
So as a simulation engineer, I strongly recommend that colleagues keep a close eye on NeRF's progress. Various NeRF research projects are still in their early stages, but deep learning, accelerated by hardware, only moves faster and faster.
I have written a great rambling deal, but I still have a few thoughts to close with.
What are the pitfalls of simulation development? I will skip the technical ones and offer some overall thoughts: be wary of sliding into meaningless work. Doing similar projects for different customers does not count as meaningless: delivering each project is valuable. Passing over ready-made tools for long-term in-house development does not count: breaking free of dependence on specific tools is valuable. The many R&D attempts later shown to be unreasonable do not count: failure in R&D is valuable too. So what exactly is "meaningless" work? That is a matter of opinion, and I cannot sum it up well.
What else can you do from this position? If your work gives you a deep understanding of the object under test, you might try switching to an algorithm development position in some direction; you could also consider simulation development for robots and drones.
The overlap between mobile robots and autonomous driving needs no elaboration, so let me mention drones here. The drone industry is certainly not as big as the automobile industry, but it already has real applications on the ground: inspection, aerial photography, surveying and mapping, and so on. UAVs also need automatic control algorithms for obstacle avoidance, path planning, etc., and their sensors resemble those of unmanned vehicles, so their simulation tests have much in common: UAVs likewise need rich perceptual inputs such as images and lidar point clouds, require even more sophisticated dynamics models, and so on.
Students interested in robot and drone simulation can start with the open-source simulation platform Gazebo (https://classic.gazebosim.org/), which does not demand the kind of computing resources that something as high-end as Nvidia's Isaac does.
This year marks the eleventh since OSRF became independent from Willow Garage, and the robot operating system ROS and Gazebo both have development histories of some 20 years. Gazebo has gradually grown from one graduate research group's tool into independent simulation software, with 11 releases of the classic version and 7 releases of the second-generation Ignition to date.
Gazebo supports ODE, Bullet, and other physics engines, and uses OGRE as its rendering engine. It can build three-dimensional environments and simulate data from sensors such as cameras and lidar, and it ships with a rich set of robot models: from robotic arms to wheeled robots to humanoids. More importantly, Gazebo naturally offers comprehensive support for algorithms on the ROS platform: after all, install the desktop-full version of ROS and Gazebo comes with it. Of course, as open-source software, Gazebo only provides a starting point: its features are well-balanced but rough, not deep in any one area. Then again, like Taizu Changquan (the most basic of fist routines), it becomes a different art entirely in Qiao Feng's hands at Juxian Manor. A minimal example of hooking into it from ROS follows.
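Here is that minimal sketch (ROS 1, Python) of listening to a lidar simulated by Gazebo. It assumes a robot already spawned in the simulation that publishes sensor_msgs/LaserScan on /scan, a common default that nonetheless depends on your robot's URDF/SDF:

```python
# A minimal rospy node reading a Gazebo-simulated 2D lidar. Assumes some
# robot in the running simulation publishes LaserScan messages on /scan.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    # Report the nearest obstacle currently visible to the simulated lidar.
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo("nearest obstacle: %.2f m", min(valid))

if __name__ == "__main__":
    rospy.init_node("scan_listener")
    rospy.Subscriber("/scan", LaserScan, on_scan)
    rospy.spin()
```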
I first encountered Gazebo in school. Later I worked in robot simulation and used Gazebo until I switched careers to autonomous driving. It is as if Gazebo and I were classmates, young and ignorant back then; after I started working, we met again and decided to rekindle things, and were inseparable for more than two years. Now that I am past 30, I have left her a message: I want better prospects, so I am leaving... and at this parting I will say just one thing: long time no see...
Original link: https://mp.weixin.qq.com/s/_bOe_g3mqoobJUbFS3SNWg