An article discussing the three core elements of autonomous driving

1. Sensors: Different positioning and functions, complementary advantages

Autonomous vehicles are typically equipped with a variety of sensors, including cameras, millimeter-wave radar, and lidar. Each sensor type has its own role and positioning, and their strengths complement one another; together, they serve as the eyes of the autonomous vehicle. New models launched since 2021 carry large numbers of sensors as redundant hardware, so that additional autonomous driving functions can be enabled later via OTA updates.

[Figure: Sensor configuration and core functions of newly released domestic models, January–May 2021]

The role of the camera: Cameras are mainly used to detect lane lines, traffic signs, traffic lights, vehicles, and pedestrians. They capture comprehensive information at low cost, but they are affected by rain, snow, and lighting. A modern camera consists of a lens, lens module, filter, CMOS/CCD sensor, ISP, and data-transmission components. Light passes through the optical lens and filter and is focused onto the sensor; the CMOS or CCD integrated circuit converts the optical signal into an electrical signal, the image signal processor (ISP) converts that into a standard digital image in RAW, RGB, or YUV format, and the data-transmission interface sends it to the computer. Cameras provide rich information, but they depend on ambient light. Because the dynamic range of current visual sensors is not especially wide, the image may be temporarily lost when light is insufficient or changes drastically, and performance degrades severely in rain or when the lens is dirty. In industry, computer vision techniques are typically used to compensate for these shortcomings.
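To make the RAW-to-RGB step concrete, here is a minimal Python sketch of the demosaicing an ISP performs, assuming an RGGB Bayer pattern and a synthetic 12-bit frame (illustrative only; real ISPs also do black-level correction, white balance, denoising, and tone mapping):

```python
import numpy as np

def demosaic_rggb(raw: np.ndarray) -> np.ndarray:
    """Naive 2x2-binning demosaic of an RGGB Bayer mosaic.

    Each 2x2 cell holds one R, two G, and one B sample; we collapse it
    into a single RGB pixel at half resolution.
    """
    r = raw[0::2, 0::2]                              # red sites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0    # average the two green sites
    b = raw[1::2, 1::2]                              # blue sites
    return np.stack([r, g, b], axis=-1)              # H/2 x W/2 x 3 RGB image

# A synthetic 4x4 12-bit RAW frame, as a CMOS sensor might deliver it.
raw = np.random.randint(0, 4096, size=(4, 4)).astype(np.float32)
rgb = demosaic_rggb(raw)
print(rgb.shape)  # (2, 2, 3)
```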

Automotive cameras are a high-growth market. The number of in-vehicle cameras rises as autonomous driving functions are upgraded: forward view generally requires 1–3 cameras, and surround view 4–8. The global automotive camera market is expected to reach 176.26 billion yuan by 2025, of which the Chinese market will account for 23.72 billion yuan.

[Figure: Global and Chinese automotive camera market size, 2015–2025 (100 million yuan)]

The automotive camera industry chain includes upstream lens-set, adhesive-material, image-sensor, and ISP-chip suppliers; midstream module suppliers and system integrators; and downstream consumer-electronics companies, autonomous driving Tier 1s, and others. By value, the image sensor (CMOS image sensor) accounts for 50% of total cost, followed by module packaging at 25% and the optical lens at 14%.

[Figure: Camera industry chain]

The role of lidar: Lidar is mainly used to measure the distance and speed of surrounding objects. At the transmitter, a semiconductor laser generates a high-energy beam; the light reflects off surrounding targets and is captured at the receiver, where the target's distance and speed are computed. Lidar offers higher detection accuracy than millimeter-wave radar or cameras, and its detection range is long, often exceeding 200 meters. By scanning principle, lidar is divided into mechanical, rotating-mirror, MEMS, and solid-state types; by ranging principle, into time-of-flight (ToF) and frequency-modulated continuous wave (FMCW). The industry is still in the exploratory stage of lidar adoption, with no clear direction yet, and it is unclear which technical route will become mainstream.
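The ToF ranging principle mentioned above reduces to a single formula: the pulse travels to the target and back, so R = c·Δt/2. A minimal sketch with an illustrative round-trip time:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Distance from a pulse's round-trip time: the light covers 2R."""
    return C * round_trip_s / 2.0

# A return arriving after ~1.33 microseconds corresponds to a target
# ~200 m away -- the long-range figure cited above.
print(tof_range_m(1.334e-6))  # ≈ 200.0 m
```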

The lidar market is vast, and Chinese companies are positioned to lead the United States. The lidar market has broad prospects: we predict that by 2025 the Chinese lidar market will approach 15 billion yuan and the global market 30 billion yuan, and that by 2030 the Chinese market will approach 35 billion yuan and the global market 65 billion yuan, an annual growth rate of 48.3%. Tesla, the largest US self-driving company, uses a pure-vision solution, and other US car companies have announced no firm plans to put lidar on vehicles, so China has become the largest potential market for automotive lidar. In 2022 a large number of domestic vehicle manufacturers will launch lidar-equipped models, and automotive lidar shipments are expected to reach 200,000 units that year. Chinese companies have a higher probability of winning: being closer to the market and cooperating closely with Chinese OEMs, they can win orders more easily and therefore cut costs faster, forming a virtuous cycle. China's vast market will help Chinese lidar companies close the technology gap with foreign rivals.

[Figure: China lidar market outlook, 2022–2030]

[Figure: List of lidar models]

Each technical route currently has its own advantages and disadvantages. Our judgment: in the future, FMCW will coexist with ToF, 1550 nm laser emitters will beat 905 nm, and the market may skip the semi-solid-state stage and jump directly to all-solid-state.

FMCW will coexist with ToF: ToF technology is relatively mature, with fast response and high detection accuracy, but it cannot measure speed directly. FMCW measures speed directly via the Doppler effect, offers high sensitivity (more than 10× that of ToF), strong interference resistance, long detection range, and low power consumption. In the future, high-end products may adopt FMCW while low-end products use ToF.
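To show how FMCW extracts both quantities at once, the sketch below applies the textbook triangular-chirp relations (the chirp parameters and beat frequencies are illustrative assumptions, not any vendor's figures): the sum of the up- and down-chirp beat frequencies encodes range, and their difference encodes Doppler velocity.

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_velocity(f_up_hz, f_down_hz, bandwidth_hz, chirp_s, wavelength_m):
    """Range and radial velocity from up/down-chirp beat frequencies.

    For a triangular FMCW waveform:
      f_up   = f_range - f_doppler
      f_down = f_range + f_doppler
    """
    slope = bandwidth_hz / chirp_s              # chirp slope S = B/T
    f_range = (f_up_hz + f_down_hz) / 2.0
    f_doppler = (f_down_hz - f_up_hz) / 2.0
    rng = C * f_range / (2.0 * slope)           # R = c * f_r / (2S)
    vel = f_doppler * wavelength_m / 2.0        # v = f_d * lambda / 2
    return rng, vel

# Illustrative 1550 nm FMCW lidar: 1 GHz chirp over 10 microseconds.
# These beat frequencies correspond to a target ~100 m away closing at ~30 m/s.
print(fmcw_range_velocity(2.80e7, 1.054e8, 1e9, 10e-6, 1550e-9))
```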

1550 nm beats 905 nm: 905 nm is a near-infrared wavelength that reaches and is absorbed by the human retina, causing retinal damage, so 905 nm systems must operate at low power. 1550 nm light lies far from the visible spectrum and is absorbed before reaching the retina, so at the same power it causes less eye damage and supports a longer detection range. The drawback is that it requires indium gallium arsenide (InGaAs) devices and cannot use silicon-based detectors, since silicon is insensitive at 1550 nm.

Skipping semi-solid-state for all-solid-state: the existing semi-solid-state solutions — rotating-mirror, prism, and MEMS — all retain a small number of mechanical parts, have short service lives in the vehicle environment, and struggle to pass automotive qualification. The VCSEL + SPAD solution for solid-state lidar uses chip-level technology, has a simple structure, and can pass vehicle regulations easily; it has become the most mainstream technical route for pure solid-state lidar today. The lidar on the back of the iPhone 12 Pro uses a VCSEL + SPAD solution.
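To make the VCSEL + SPAD principle concrete: the SPAD array counts single-photon arrival times into histogram bins, and the peak bin gives the round-trip time. A toy sketch with a synthetic histogram and an assumed 1 ns bin width (not any real sensor's output):

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
BIN_S = 1e-9        # 1 ns time bins (illustrative assumption)

def spad_distance_m(histogram: np.ndarray) -> float:
    """Distance from the peak bin of a SPAD photon-arrival histogram."""
    peak_bin = int(np.argmax(histogram))
    round_trip = peak_bin * BIN_S
    return C * round_trip / 2.0

# Synthetic histogram: uniform ambient photon counts plus a return peak at
# bin 333 (~50 m). Real devices accumulate many laser shots to build this up.
rng = np.random.default_rng(0)
hist = rng.poisson(2.0, size=1000)
hist[333] += 80
print(spad_distance_m(hist))  # ≈ 49.9 m
```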

[Figure: Technical routes and representative lidar companies]

High-precision maps may be disrupted; the battle over technical routes continues. Tesla has proposed a high-precision map that requires no advance surveying: from camera data, artificial intelligence techniques reconstruct a three-dimensional model of the environment. It takes a crowdsourcing approach, with each vehicle contributing road information that is aggregated and unified in the cloud. We therefore need to stay alert to high-precision maps being disrupted by technological innovation.

Some practitioners believe high-precision maps are indispensable for intelligent driving. In terms of field of view, high-precision maps are never occluded and suffer no distance or visibility limits, and they still work under adverse weather. In terms of error, they can cancel out some sensor errors, effectively supplementing and correcting the existing sensor suite under certain road conditions. In addition, high-precision maps can support a driving-experience database: mining multi-dimensional spatio-temporal data to identify dangerous areas and provide drivers with new driving-experience datasets.

Lidar-plus-vision collection, combined with dedicated collection vehicles and the crowdsourcing model, is the mainstream path for high-precision maps going forward.

High-precision maps must balance two metrics: accuracy and speed. Collection accuracy that is too low, or an update frequency that is too low, cannot meet autonomous driving's demand for high-precision maps. To solve this, map companies have adopted new methods such as the crowdsourcing model: each self-driving car acts as a map-collection device, providing high-precision dynamic information that is aggregated in the cloud and distributed to other cars. Under this model, leading map companies — with more car models able to participate in crowdsourcing — can collect more accurate maps faster, sustaining a strong-get-stronger dynamic.
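A highly simplified sketch of the crowdsourced aggregation step: vehicles upload feature observations, and the cloud groups them by map tile and averages them. The data model here is hypothetical, not any map vendor's pipeline:

```python
from collections import defaultdict
from statistics import mean

def aggregate_observations(obs):
    """Average per-tile positions of lane-marking observations.

    obs: iterable of (tile_id, x_m, y_m) reports uploaded by vehicles.
    Returns {tile_id: (x_mean, y_mean, n_reports)}. More reports from a
    larger fleet mean fresher, more accurate tiles -- the strong-get-stronger
    effect described above.
    """
    buckets = defaultdict(list)
    for tile_id, x, y in obs:
        buckets[tile_id].append((x, y))
    return {
        t: (mean(p[0] for p in pts), mean(p[1] for p in pts), len(pts))
        for t, pts in buckets.items()
    }

reports = [("tile_42", 10.02, 5.01), ("tile_42", 9.98, 4.99), ("tile_7", 0.0, 1.0)]
print(aggregate_observations(reports))
```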

[Figure: Amap fusion solution]

2. Computing platform: Chip requirements keep rising, and semiconductor technology is the moat

The computing platform is also called the autonomous driving domain controller. As the penetration of L3-and-above autonomous driving increases, so do the computing-power requirements. Although L3 regulations and algorithms have not yet landed, vehicle makers have adopted computing-power redundancy to reserve room for subsequent software iteration.

The computing platform will have two development characteristics in the future: heterogeneity and distributed elasticity.

Heterogeneity: For high-end autonomous vehicles, the computing platform must be compatible with many types of sensors and data while delivering high safety and high performance. No existing single chip can meet all the interface and computing-power requirements, so a heterogeneous chip hardware solution is needed. Heterogeneity can take the form of a single board integrating chips of multiple architectures — for example, Audi's zFAS integrates an MCU (microcontroller), FPGA (field-programmable gate array), CPU (central processing unit), and more — or of a powerful single chip (SoC, system-on-chip) integrating multiple architectural units, such as NVIDIA Xavier, which combines GPU (graphics processor) and CPU heterogeneous units.

Distributed elasticity: Today's automotive electronic architecture is evolving from many single-function chips toward integrated domain controllers. High-end autonomous driving requires an on-board intelligent computing platform with system redundancy and smooth expansion. On one hand, to serve both the heterogeneous architecture and system redundancy, multiple boards are used to decouple and back up the system; on the other, multi-board distributed expansion meets the computing-power and interface requirements of high-end autonomous driving. The system as a whole cooperates to deliver autonomous driving functions under the unified management of one autonomous driving operating system, adapting to different chips by swapping hardware drivers, communication services, and so on. As the autonomy level rises, the system's demand for computing power and interfaces keeps growing; besides raising single-chip computing power, hardware components can be stacked to scale flexibly and smoothly, increasing the whole system's computing power, interfaces, and functionality.
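As a toy illustration of smooth expansion, the sketch below computes how many identical boards must be stacked to satisfy both a computing-power budget and a sensor-interface budget; all figures are hypothetical:

```python
import math

def boards_needed(required_tops: float, required_cameras: int,
                  tops_per_board: float, cameras_per_board: int) -> int:
    """Boards to stack so that both compute and sensor interfaces suffice."""
    by_compute = math.ceil(required_tops / tops_per_board)
    by_interface = math.ceil(required_cameras / cameras_per_board)
    return max(by_compute, by_interface)

# E.g. a stack needing 1000 TOPS and 12 camera inputs, on boards offering
# 254 TOPS and 4 camera inputs each (illustrative figures).
print(boards_needed(1000, 12, 254, 4))  # 4
```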

The heterogeneous distributed hardware architecture mainly consists of three parts: AI unit, computing unit and control unit.

AI unit: Uses AI chips with a parallel computing architecture, with a multi-core CPU configuring the AI chip and the necessary processors. AI chips are currently used mainly for efficient fusion and processing of multi-sensor data, outputting key information for the execution layer. The AI unit is the most demanding part of the heterogeneous architecture; it must break through cost, power-consumption, and performance bottlenecks to meet industrialization requirements. The AI chip can be a GPU, FPGA, ASIC (application-specific integrated circuit), and so on.

[Figure: Comparison of different chip types]

Computing unit: Consists of multiple CPUs with high single-core frequency and strong computing power, meeting the corresponding functional-safety requirements. Running a hypervisor and a Linux kernel management system, it manages software resources, schedules tasks, executes most of the core autonomous driving algorithms, and fuses multi-dimensional data to achieve path planning and decision control.

Control unit: Based mainly on a traditional vehicle controller (MCU). The control unit runs the basic software of the Classic AUTOSAR platform; the MCU connects to ECUs through communication interfaces to achieve lateral and longitudinal vehicle-dynamics control, and it meets the ASIL-D functional-safety level.

Take the Tesla FSD chip as an example. It adopts a CPU + GPU + ASIC architecture, containing three quad-core Cortex-A72 clusters — 12 CPU cores in total running at 2.2 GHz — a Mali G71 MP12 GPU running at 1 GHz, 2 neural processing units (NPUs), and various other hardware accelerators. The three kinds of processors divide the work clearly: the Cortex-A72 CPUs handle general-purpose computing, the Mali GPU handles lightweight post-processing, and the NPUs run neural-network computations. GPU computing power reaches 600 GFLOPS, and NPU computing power reaches 73.73 TOPS.
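The NPU figure can be reproduced with simple arithmetic. Assuming each NPU contains a 96×96 MAC array clocked at 2 GHz, with each MAC counted as two operations (figures Tesla has presented publicly; treated here as reported, not independently verified):

```python
macs = 96 * 96          # MAC units per NPU array
clock_hz = 2.0e9        # NPU clock
ops_per_mac = 2         # one multiply + one accumulate
npus = 2                # the FSD chip carries two NPUs

tops_per_npu = macs * clock_hz * ops_per_mac / 1e12
print(tops_per_npu)          # ≈ 36.86 TOPS per NPU
print(tops_per_npu * npus)   # ≈ 73.73 TOPS, matching the figure above
```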

[Figure: Tesla FSD chip architecture]

The core technology of the autonomous driving domain controller is the chip, followed by software and the operating system; the short-term moat is customers and delivery capability.

The chip determines the computing power of the autonomous driving platform; it is hard to design and manufacture and can easily become a chokepoint. The high-end market is dominated by international semiconductor giants — NVIDIA, Mobileye, Texas Instruments, NXP, and others — while in the L2-and-below market, domestic companies represented by Horizon are gradually winning customer recognition. Chinese domain controller makers generally cooperate deeply with one chip maker, buying its chips and delivering to vehicle manufacturers on the strength of their own hardware manufacturing and software integration capabilities; such partnerships are generally exclusive. In terms of chip partnerships, Desay SV, tied to NVIDIA, has the most obvious advantage, and Thunderstar is tied to Qualcomm; among other domestic domain controller companies, Huayang Group is tied to Huawei HiSilicon, and Neusoft Reach has established partnerships with NXP and Horizon.

[Figure: Partnerships between domestic domain controller companies and chip companies]

A domain controller's competitiveness is determined by the chip companies it partners with upstream, because downstream OEMs often buy the complete solution a chip company provides. For example, the high-end models of NIO, Li Auto, and XPeng use NVIDIA Orin chips together with NVIDIA's autonomous driving software; Zeekr and BMW buy solutions from chip company Mobileye; Changan and Great Wall buy Horizon's L2 solution. The cooperation between chip and domain controller companies deserves continued attention.

[Figure: Cooperation between chip companies' products and car companies]

3. Data and Algorithms: Data helps to iterate algorithms, and algorithm quality is the core competitiveness of autonomous driving companies

User data is extremely important for improving autonomous driving systems. In driving, there are rare scenarios that are unlikely to occur, called corner cases; if the perception system meets a corner case it cannot handle, serious safety risks follow. For example, several years ago Tesla's Autopilot failed to recognize a large white truck crossing the road and drove straight into its side, killing the owner; in April 2022, an XPeng with its assisted-driving function engaged crashed into a vehicle that had rolled over in the middle of the road.

There is only one solution to such problems: car companies must take the lead in collecting real-world data and, at the same time, simulate more similar environments on the autonomous driving computing platform so the system learns to handle them better next time. A typical example is Tesla's Shadow Mode, which identifies potential corner cases by comparing the system's decisions with the human driver's behavior; those scenes are then annotated and added to the training set.
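A minimal sketch of the shadow-mode comparison: run the model silently, compare its proposed action with what the human driver actually did, and flag large disagreements as candidate corner cases. The thresholds and field names are hypothetical, not Tesla's implementation:

```python
from dataclasses import dataclass

@dataclass
class Action:
    steer_deg: float    # steering angle
    accel_mps2: float   # longitudinal acceleration

def is_corner_case(model: Action, human: Action,
                   steer_tol_deg: float = 5.0, accel_tol: float = 1.5) -> bool:
    """Flag frames where the shadow model disagrees sharply with the driver."""
    return (abs(model.steer_deg - human.steer_deg) > steer_tol_deg
            or abs(model.accel_mps2 - human.accel_mps2) > accel_tol)

# Model wanted to keep going; the human braked hard and swerved -> upload.
print(is_corner_case(Action(0.0, 0.2), Action(-12.0, -4.0)))  # True
```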

Accordingly, car companies need a data-processing pipeline so that collected real-world data feeds model iteration, and the iterated model lands on real mass-produced vehicles. Meanwhile, to let the machine learn corner cases at scale, each captured corner case is expanded through large-scale simulation to derive more corner cases for the system to learn from. NVIDIA DRIVE Sim, a simulation platform NVIDIA built with metaverse technology, is one such simulation system. Companies that lead on data build data moats.

The common data-processing pipeline, sketched in code after the list, is:

1) Determine whether the autonomous vehicle encounters a corner case and upload it

2) Label the uploaded data

3) Use simulation software to simulate and create additional training data

4) Iteratively update the neural network model with data

5) Deploy the model to real vehicles through OTA
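The five steps can be strung together as a skeleton loop. Every function below is a hypothetical placeholder for what is, in reality, a large subsystem:

```python
# Hypothetical placeholders for the real subsystems behind each step.
def annotate(frame):           return {"frame": frame, "label": "obstacle"}
def simulate_variants(sample): return [{**sample, "variant": i} for i in range(3)]
def retrain(model, data):      return {"version": model["version"] + 1,
                                       "trained_on": len(data)}
def deploy_ota(model):         print(f"deploying model v{model['version']}")

def data_closed_loop(frames, model, is_corner_case):
    corner = [f for f in frames if is_corner_case(f)]               # 1) detect & upload
    labeled = [annotate(f) for f in corner]                          # 2) label
    synthetic = [v for s in labeled for v in simulate_variants(s)]   # 3) simulate more
    model = retrain(model, labeled + synthetic)                      # 4) iterate the model
    deploy_ota(model)                                                # 5) OTA to vehicles
    return model

model = data_closed_loop(range(100), {"version": 1}, lambda f: f % 37 == 0)
print(model)  # {'version': 2, 'trained_on': 12}
```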

[Figure: Data processing pipeline]

Behind the data closed loop stand data centers with enormous computing power. According to NVIDIA's talk at CES 2022, a company building an L2 assisted-driving system needs only 1,000–2,000 GPUs, while a company developing a complete L4 autonomous driving system needs 25,000 GPUs in its data center.

1. Tesla currently runs three major computing centers with 11,544 GPUs in total: an auto-labeling center with 1,752 A100 GPUs, and two training centers with 4,032 and 5,760 A100 GPUs respectively (the total is checked in the sketch after this list). Its self-developed DOJO supercomputer, unveiled at 2021 AI Day, holds 3,000 D1 chips with computing power up to 1.1 EFLOPS.

2. The Shanghai supercomputing center that SenseTime is building plans for 20,000 A100 GPUs; once complete, its peak computing power will reach 3.65 EFLOPS (BF16/CFP8).
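As a quick check, the three Tesla clusters do sum to the stated total, and an aggregate peak throughput can be estimated if one assumes roughly 312 dense BF16 TFLOPS per A100 (NVIDIA's published peak; real training throughput is far lower):

```python
a100_clusters = [1752, 4032, 5760]   # auto-labeling + two training centers
total_gpus = sum(a100_clusters)
print(total_gpus)  # 11544, matching the stated total

TFLOPS_PER_A100 = 312  # peak dense BF16 tensor throughput (assumption)
print(total_gpus * TFLOPS_PER_A100 / 1e6, "EFLOPS peak")  # ≈ 3.6 EFLOPS
```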
