


01 Integrated positioning: key design elements for future-oriented intelligent driving
In the era of intelligent driving, the automobile itself is being reinvented. Automotive software and hardware, in-vehicle architecture, the competitive landscape, and the distribution of value across the industry chain will all undergo profound changes. Within this wave of change, we believe intelligent driving will pass through three successive stages: wider adoption of assisted driving, mature autonomous driving solutions, and a fully developed autonomous driving ecosystem. These stages will bring three waves of opportunity for hardware, software systems, and commercial operation respectively.
Among them, the HD map (high-definition map), as a key element of navigation and positioning, will also undergo major design changes. This is mainly reflected in the following aspects:
- High-precision map and navigation map
A navigation map provides the length of a route and the approximate road conditions along the journey. A high-precision map provides far more detailed information about the road, such as road signs, slope, lane lines, and the exact position of each lane line, all of which are annotated on the HD map. Even the location of an individual traffic light is marked with high-precision GPS coordinates. Therefore, once the global path planner has produced a route and that route has been converted into lane-level paths, the autonomous vehicle can follow each path marked on the HD map and drive along the centerline of each lane, as the sketch below illustrates.
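As a concrete illustration, here is a minimal Python sketch of expanding a global route into lane-level centerline waypoints. The data layout (a dictionary of road segments, each holding lanes with centerline points) is hypothetical and not tied to any particular HD map format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Lane:
    lane_id: str
    centerline: List[Tuple[float, float]]   # (x, y) lane-center points in the map frame

# Hypothetical lane database keyed by road-segment id, as an HD map might provide.
HD_MAP = {
    "segment_A": [Lane("A_lane_1", [(0.0, 0.0), (10.0, 0.1), (20.0, 0.0)])],
    "segment_B": [Lane("B_lane_1", [(20.0, 0.0), (30.0, 0.3), (40.0, 0.5)])],
}

def route_to_lane_level(route_segments: List[str]) -> List[Tuple[float, float]]:
    """Expand a global route (segment ids) into a dense list of
    lane-centerline waypoints that the vehicle can then track."""
    waypoints = []
    for seg in route_segments:
        lane = HD_MAP[seg][0]            # pick one lane per segment for simplicity
        waypoints.extend(lane.centerline)
    return waypoints

print(route_to_lane_level(["segment_A", "segment_B"]))
```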
- The connection between high-precision map and other modules
High-precision maps are connected to the other modules of the autonomous vehicle: positioning, prediction, perception, planning, safety, simulation, control, and human-machine interaction all rely on them to some degree. It is not that these modules cannot work without a high-precision map, but with one they obtain more accurate information and can make decisions better suited to the traffic conditions of the moment. The finer technical details are not elaborated here; the following only outlines the general ideas.
- High-precision map and positioning
The main role of the high-precision map in positioning is that it provides information about static objects whose positions have already been determined. The autonomous vehicle can then use these static objects to find its own relative position within the map. If the static objects carry high-precision latitude and longitude coordinates, the vehicle can work backwards from them to recover its own coordinates, realizing a positioning method that fuses the high-precision map with lidar and camera data. This removes the dependence on GPS, whose data becomes very noisy when the signal is blocked. At this stage, positioning based on fusing radar and vision is still not as accurate as differential GPS, but it remains a valid positioning method: when there is no GPS signal, the vehicle cannot drive without its own position estimate and must rely on other sources, as the sketch below illustrates.
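Below is a minimal sketch of the landmark idea: given the map coordinates of matched static objects and their positions as measured in the vehicle frame, the vehicle position can be recovered by inverting the observation. The numbers are toy values, and a real system would weight each landmark by its measurement covariance.

```python
import numpy as np

def estimate_position(landmarks_map, offsets_vehicle, heading):
    """Estimate the vehicle position from HD-map landmarks.

    landmarks_map   : (N, 2) known map coordinates of matched static objects
    offsets_vehicle : (N, 2) the same objects as measured by lidar/camera,
                      expressed in the vehicle frame
    heading         : vehicle yaw in radians (e.g. from IMU integration)

    Each landmark gives one estimate of the vehicle position; averaging them
    reduces sensor noise.
    """
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s], [s, c]])                 # vehicle -> map rotation
    estimates = landmarks_map - offsets_vehicle @ R.T
    return estimates.mean(axis=0)

# Toy example with two matched landmarks and zero heading
lm = np.array([[105.0, 42.0], [98.0, 55.0]])
obs = np.array([[5.2, 1.9], [-1.8, 15.1]])
print(estimate_position(lm, obs, heading=0.0))      # ~ (99.8, 40.0)
```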
- High-precision map and decision-making
The relationship between the high-precision map and the decision-making module is even simpler. If the vehicle knows the route it will take, together with the road signs, traffic lights, and road information along that route, the decision-making module can make choices that better match the upcoming road conditions. It is the equivalent of knowing what will happen in the future and adjusting current behavior in advance to deal with it.
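A tiny illustrative sketch of this look-ahead idea follows. The element layout (arc-length position, type, value) is hypothetical and not taken from any real map API.

```python
def upcoming_elements(route_s, map_elements, horizon_m=200.0):
    """Return HD-map elements (traffic lights, stop signs, speed limits)
    within a look-ahead horizon along the route.

    route_s      : current longitudinal position (arc length) on the route, in metres
    map_elements : list of (s_position, element_type, value) tuples taken
                   from the HD map for this route (hypothetical layout)
    """
    return [e for e in map_elements if route_s <= e[0] <= route_s + horizon_m]

elements = [(120.0, "speed_limit", 60), (180.0, "traffic_light", None), (450.0, "stop_sign", None)]
print(upcoming_elements(100.0, elements))   # the speed limit and the light fall inside 200 m
```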
- High-precision map and simulation module
The relationship between the high-definition map and the simulation module is also easy to understand: as long as we localize the vehicle, or verify other algorithms, on a map built to high-precision map standards, the information the vehicle obtains in real applications matches the information obtained in simulation. In other words, code developed in the simulation environment can, to a large extent, also be used in the real environment.
- High-precision map and perception module
The perception module is one of the more complex modules in driverless driving because it must handle a great many real-world situations. In practice, however, most of what we perceive in the environment is static, so there is no need to spend additional computing power on things that can be stored in a database in advance. For example, if a building stands at a certain location, then no matter how many times the vehicle drives past, the building it sees is always at that same point; its position does not change with how the vehicle perceives it. The exact location of such objects can be collected by a high-precision map collection vehicle and stored on the autonomous vehicle's local drive. The vehicle can then consult this database to know the building is there without re-identifying it every time. Just as in the positioning module, if we know the building's high-precision coordinates we can work backwards to find our own position, and the freed-up computing power can be concentrated on identifying dynamic objects, using the building's pre-recorded shape and physical characteristics as a prior. A minimal sketch of this idea follows.
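In the sketch below, detections that match a static object already stored in the HD map database are set aside, so compute can focus on the remaining, presumably dynamic, detections. The association-by-distance rule and the radius value are simplifying assumptions.

```python
import numpy as np

def split_static_dynamic(detections, static_map_points, radius=2.0):
    """Separate detections into 'already in the HD map' (static) and
    'not in the map' (candidate dynamic objects).

    detections        : (N, 2) detected object centers in the map frame
    static_map_points : (M, 2) positions of buildings/poles/signs stored
                        in the HD-map database
    radius            : association distance in metres (tuning parameter)
    """
    static, dynamic = [], []
    for d in detections:
        dists = np.linalg.norm(static_map_points - d, axis=1)
        (static if dists.min() < radius else dynamic).append(d)
    return static, dynamic

static_db = np.array([[50.0, 10.0], [80.0, 12.0]])
dets = np.array([[50.3, 10.1], [65.0, 11.0]])
print(split_static_dynamic(dets, static_db))   # first detection is a known static object
```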
- HD map and control module
The specific content of the control module is very detailed and is not covered in depth here, but if the goal is to control the steering angle, the lane-centerline data provided by the high-precision map is essential. Although lane lines can be recognized from the camera and the centerline derived from them, this is still less accurate than the data provided by the high-precision map. Camera-based lane recognition runs in real time and will inevitably make occasional mistakes, and when lane markings become unclear through lack of maintenance the camera cannot recover the corresponding lane information at all; in those cases the high-precision map is needed. Lane lines are a critical cue even in human driving, which is why current camera-based autonomous driving is mostly limited to highways: only highway lane markings are maintained well enough to be recognized reliably. Urban lane markings, by contrast, are often in poor condition, so camera-based autonomous driving on urban roads is not yet advisable. The sketch below shows how a map-provided centerline can feed a standard steering law.
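For illustration only, the sketch below applies the textbook pure-pursuit steering law to a centerline taken from the map. It is a generic example of why centerline accuracy matters, not a description of any specific production controller; the lookahead distance and wheelbase are placeholder values.

```python
import numpy as np

def pure_pursuit_steer(pose, centerline, lookahead=8.0, wheelbase=2.7):
    """Front-wheel steering angle from a pure-pursuit tracker following the
    HD-map lane centerline.

    pose       : (x, y, yaw) of the vehicle in the map frame
    centerline : (N, 2) centerline points from the HD map, ordered along the lane
    """
    x, y, yaw = pose
    # pick the first centerline point at least `lookahead` metres away
    # (falls back to the first point if none qualifies; fine for a sketch)
    d = np.linalg.norm(centerline - np.array([x, y]), axis=1)
    target = centerline[np.argmax(d >= lookahead)]
    # lateral offset of the target in the vehicle frame
    dx, dy = target[0] - x, target[1] - y
    ty = -np.sin(yaw) * dx + np.cos(yaw) * dy
    ld = np.hypot(dx, dy)
    # pure-pursuit curvature and the corresponding front-wheel angle
    curvature = 2.0 * ty / (ld ** 2)
    return np.arctan(wheelbase * curvature)

line = np.array([[float(i), 0.05 * i] for i in range(30)])
print(pure_pursuit_steer((0.0, 0.0, 0.0), line))
```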
- The production process of high-precision maps
To be frank, this is not something I have worked on directly; it is based on what I learned from an instructor. High-precision maps are first built by collection vehicles equipped with various sensors sweeping the streets. After scanning each street, such a vehicle obtains point cloud data, camera data, and high-precision latitude/longitude information. Staff then edit this data further offline. The work involves point cloud stitching and the road features recognized by cameras, such as lane lines, zebra crossings, and traffic lights; these static objects need to be further confirmed and annotated by staff. Although the camera on the collection vehicle performs preliminary feature extraction of these road elements, it is still computer vision, and the extracted information is not 100% correct: it may be wrong, or some annotations may be poorly placed. The final step therefore still requires staff to confirm and label the results.
- The high-precision map production process supports edge-computing V2X map services
The iterative map update process described above can be applied to realize L4/L5 driverless functions and to generate the corresponding robot control modes. It can also be used in commercial vehicles to ultimately realize driverless operation and even remote driving.
02 High-precision integrated positioning solution for mass production
Obviously, to achieve precise positioning and keep extending its functional capability, the high-precision map must be supported by a continuously optimized integrated positioning solution. This process involves two main software algorithms. The first performs dynamic optimal estimation of the vehicle pose through a full-state extended Kalman filter; the second uses visual sensors to extract semantic information about the road environment and obtains a precise position through a map-matching algorithm. In addition, economy, fit, and overall performance can be improved by configuring an industrial-grade vehicle-mounted RTK terminal: a high-performance industrial-grade 32-bit processor with a built-in high-precision RTK board establishes a channel to the Qianxun platform over 3G/4G/5G, sends GGA messages to the differential server, receives the differential corrections, and then outputs the precise position over RS232.
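The following is a deliberately reduced sketch of the filter's predict/update structure, using a constant-velocity state and an absolute position measurement (as might come from RTK or map matching). A production full-state EKF would also estimate attitude, IMU biases, and more, and its models would be nonlinear; the noise values here are placeholders.

```python
import numpy as np

class PoseFilterSketch:
    """Minimal predict/update sketch with state [x, y, vx, vy]."""

    def __init__(self):
        self.x = np.zeros(4)                          # state estimate
        self.P = np.eye(4)                            # state covariance
        self.Q = np.diag([0.01, 0.01, 0.1, 0.1])      # process noise (assumed)
        self.R = np.diag([0.05, 0.05])                # measurement noise (assumed)
        self.H = np.array([[1.0, 0, 0, 0],
                           [0, 1.0, 0, 0]])           # we measure position only

    def predict(self, dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt                        # x += vx*dt, y += vy*dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        y = z - self.H @ self.x                       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

ekf = PoseFilterSketch()
ekf.predict(0.1)
ekf.update(np.array([1.0, 2.0]))                      # position fix from RTK / map matching
print(ekf.x)
```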
The most important processes around high-precision maps include crowdsourced map collection and distribution. Crowdsourced collection can be understood as follows: road data gathered by users through the self-driving vehicle's own sensors, or through other low-cost sensor hardware, is transmitted to the cloud for fusion, and accuracy is improved by aggregating data to complete the production of the high-precision map. The full crowdsourcing pipeline includes physical sensor reporting, map scene matching, scene clustering, change detection, and updating.
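As a toy illustration of the change-detection step, the sketch below compares features reported by vehicles against the current map and flags additions and removals. Real pipelines first cluster many independent reports and only commit a change once enough vehicles confirm it.

```python
import math

def detect_changes(reported_features, map_features, match_radius=1.5):
    """Flag map changes by comparing crowdsourced reports with the current map.

    reported_features : list of (x, y) feature positions reported by vehicles
    map_features      : list of (x, y) feature positions currently in the map
    match_radius      : association distance in metres (tuning parameter)
    """
    def near(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1]) < match_radius

    added   = [f for f in reported_features if not any(near(f, m) for m in map_features)]
    removed = [m for m in map_features if not any(near(m, f) for f in reported_features)]
    return added, removed

print(detect_changes([(10.0, 5.0), (30.0, 2.0)], [(10.2, 5.1), (50.0, 8.0)]))
```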
04 Where is the new map-based autonomous driving architecture headed?
The current high-precision map architecture of autonomous driving systems is still distributed. Its key concerns include crowdsourced map collection, how the map box parses the raw high-precision map information, and how the map data is fused with inputs from the other sensors. Note that future autonomous driving architectures will continue to evolve from this distributed approach towards a centralized one. The centralized approach can be reached in three steps:
Step 1: Fully centralized control of the intelligent driving domain
That is, the intelligent driving system (ADS) and the automated valet parking system (AVP) are brought under fully centralized control, with a central processing unit performing fusion, prediction, planning, and other processing on the information from both systems. The processing of all sensing and data units related to intelligent driving and intelligent parking (high-precision map, lidar, distributed cameras, millimeter-wave radar, etc.) is correspondingly integrated into the central domain control unit.
Step 2: Fully centralized control of the intelligent driving domain and the intelligent cockpit domain
This is the second stage of full centralization: all functions covered by the intelligent driving domain controller (such as autonomous driving and automatic parking) and all functions covered by the intelligent cockpit domain (including driver monitoring DMS, the infotainment head unit iHU, and the instrument display IP) are integrated together.
Step 3: Fully centralized control of the entire intelligent vehicle
This is a fully integrated control scheme covering the intelligent driving, intelligent cockpit, and intelligent chassis domains. The three main function groups are integrated into a single vehicle central control unit, and the subsequent processing of this data places greater performance demands (computing power, bandwidth, storage, etc.) on the domain controller.
The high-precision map positioning development we are concerned about here will be more oriented towards centralized design methods in the future. We will elaborate on this.
In terms of architectural trends for high-precision maps in future autonomous driving system control, future systems will strive to integrate the sensing unit, decision-making unit, and map positioning unit into the central domain control unit, aiming to reduce the dependence on a dedicated high-precision map box from the bottom up. The design of such a domain controller fully considers the integration of the AI computing chip (SoC), the logic computing chip (MCU), and the high-precision map box.
Under the overall cloud-control logic, the high-precision map involves sensor data collection, data learning, AI training, HD map services, simulation, and other services. At the same time, as the vehicle moves and is validated, the map data is continuously updated through physical sensing, dynamic data sensing, map target sensing, positioning, path planning, and so on, and is uploaded to the cloud via OTA to refresh the overall crowdsourced dataset.
The previous section described how high-precision map data is turned into data the autonomous driving controller can process. The raw data handled by the high-precision map is EHP data, which essentially carries the following main items (a sketch of a possible container follows the list):
1: The external GPS position as received;
2: The position information after matching to the map;
3: The constructed road-network topology;
4: The data sent out over CAN;
5: Partially fused navigation data.
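A hypothetical container for these items might look like the sketch below. The field names and types are illustrative only and do not follow the actual ADASIS/EHP specification.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class EhpFrame:
    """Illustrative grouping of the EHP items listed above (not a real spec)."""
    gps_fix: Tuple[float, float]                 # (lat, lon) received from the GNSS receiver
    map_matched_position: Tuple[int, int, float] # (road_id, lane_id, offset_m) after map matching
    road_topology: List[Tuple[int, int]]         # upcoming road-network links (from_id, to_id)
    can_payload: bytes = b""                     # serialized form to be sent on the CAN bus
    nav_hint: Optional[str] = None               # fused navigation data, e.g. the next manoeuvre

frame = EhpFrame((31.2304, 121.4737), (1042, 2, 15.3), [(1042, 1043)], nav_hint="keep_lane")
print(frame)
```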
This data is generally passed directly from the HD map sensing end over Gigabit Ethernet to the high-precision map central processing unit, which we call the "high-precision map box". After further processing in the map box (the actual processing will be explained in detail in a subsequent article), it is converted into EHR data (in practice CAN FD frames) that the autonomous driving controller can handle.
For the next generation of autonomous driving systems, we aim to integrate high-precision map processing into the autonomous driving domain controller. This means the domain controller must take over all of the data parsing previously done by the map box, so we need to focus on the following points:
1) Can the AI chip of the autonomous driving domain controller handle all of the sensor data needed for the high-precision map?
2) Does the logic computing unit used for high-precision map positioning have enough computing power to fuse the sensor data?
3) Does the entire underlying operating system meet functional safety requirements?
4) Which connection between the AI chip and the logic chip ensures reliable data transmission: Ethernet or CAN FD?
To answer these questions, we need to analyze how the controller processes high-precision map data, as described below.
The AI chip of the autonomous driving system will be mainly responsible for the basic processing of sensor data in future high-precision map pipelines, including camera data, lidar data, millimeter-wave radar data, and so on. Beyond basic point cloud fusion and clustering, the processing also includes commonly used deep learning algorithms, and ARM cores are generally used for the central computation.
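As a small example of the point cloud clustering step, the sketch below groups lidar points into object candidates with DBSCAN. The algorithm choice and the eps/min_samples values are illustrative and depend on sensor resolution and range; production stacks often use custom, hardware-accelerated clustering instead.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_obstacles(points_xyz, eps=0.7, min_samples=8):
    """Cluster an (N, 3) lidar point cloud into object candidates.

    Returns a list of point arrays, one per cluster; DBSCAN's label -1
    (noise) is discarded.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)
    return [points_xyz[labels == k] for k in set(labels) if k != -1]

# Toy cloud: two dense blobs plus a stray point
cloud = np.vstack([
    np.random.normal([5, 0, 0], 0.2, (50, 3)),
    np.random.normal([20, 3, 0], 0.2, (50, 3)),
    [[100.0, 100.0, 0.0]],
])
print(len(cluster_obstacles(cloud)))   # expect 2 clusters
```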
As the logic computing unit of the autonomous driving domain controller, the MCU will subsequently take over all the logical computation originally performed by the high-precision map box. This includes front-end vector aggregation, sensor-fusion positioning, building the road-network map, and, most importantly, replacing the original map box's function of converting EHP information into EHR signals (how the central MCU can convert EHP into EHR efficiently will be detailed in a later article) and transmitting them over the CAN bus. Finally, a logic computing unit such as AutoBox performs path planning, decision-making, control, and other operations.
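To make the EHR-over-CAN FD idea concrete, the sketch below packs a simplified position message into a payload that fits a single CAN FD frame (at most 64 bytes). The field layout is invented for illustration and is not the real ADASIS EHR encoding; sending it would additionally require a CAN FD interface.

```python
import struct

def pack_ehr_position(road_id: int, lane_id: int, offset_cm: int, speed_limit_kph: int) -> bytes:
    """Pack a simplified EHR-style position message into a CAN FD payload.

    Layout (little-endian, illustrative only):
      uint32 road_id | uint16 lane_id | uint32 offset_cm | uint8 speed_limit_kph
    """
    payload = struct.pack("<IHIB", road_id, lane_id, offset_cm, speed_limit_kph)
    assert len(payload) <= 64, "CAN FD payload limit exceeded"
    return payload

# Example: road 1042, lane 2, 1530 cm past the link start, 80 km/h limit
frame_data = pack_ehr_position(1042, 2, 1530, 80)
print(len(frame_data), frame_data.hex())
```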
05 Summary
Future autonomous driving will tend to move all of the data processing that high-precision maps currently perform in the map box into the autonomous driving domain controller, aiming to establish true central processing with the vehicle domain controller as the integration unit. This approach not only saves computing resources but also lets the AI data-processing algorithms be applied more effectively to high-precision positioning, ensuring that the two share a consistent understanding of the environment. Going forward, we need to pay more attention to the integration of high-precision sensor data, and invest more effort in chip computing power, interface design, bandwidth design, and functional-safety design.