One Article to Understand Lidar and Visual Fusion Perception for Autonomous Driving

2022 is the window period for intelligent driving to move from L2 to L3/L4. More and more automobile manufacturers have begun deploying higher-level intelligent driving for mass production, and the era of automotive intelligence has quietly arrived.

With improvements in lidar hardware, automotive-grade mass production, and falling costs, high-level intelligent driving functions have pushed lidar into mass production for passenger cars. A number of lidar-equipped models will be delivered this year, which is why 2022 is also known as "the first year of lidar on the road."

01 Lidar sensor vs image sensor

Lidar (laser detection and ranging) is a sensor used to accurately obtain the three-dimensional position of an object. With its excellent performance in target contour measurement and general obstacle detection, it is becoming a core component of L4 autonomous driving.

However, lidar's limited ranging distance (generally around 200 meters; mass-production models from different manufacturers vary) gives it a perception range much smaller than that of image sensors.

And because its angular resolution (generally 0.1° or 0.2°) is coarse, the resolution of the point cloud is much lower than that of an image sensor. At long range, the points projected onto a target may be so sparse that the object cannot be resolved at all. For point-cloud target detection, the effective range the algorithm can actually use is only about 100 meters.
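The sparsity argument above follows directly from the geometry: at range r, an angular step of θ corresponds to a lateral gap of roughly r·θ (in radians) between adjacent points. A minimal sketch of this back-of-the-envelope calculation (the 0.2° resolution and 1.8 m car width are illustrative values, not a specific sensor's spec):

```python
import math

def point_spacing(range_m: float, angular_res_deg: float) -> float:
    """Approximate lateral spacing between adjacent lidar points at a given range."""
    return range_m * math.radians(angular_res_deg)

def points_on_target(target_width_m: float, range_m: float, angular_res_deg: float) -> int:
    """Rough count of horizontal points a target receives in one scan line."""
    return int(target_width_m / point_spacing(range_m, angular_res_deg)) + 1

# At 200 m with 0.2° resolution, adjacent points are ~0.70 m apart,
# so a 1.8 m-wide car receives only a handful of points per scan line.
print(f"spacing at 200 m: {point_spacing(200, 0.2):.2f} m")
print(f"points across a 1.8 m car: {points_on_target(1.8, 200, 0.2)}")
```

At 100 m the spacing halves, which is consistent with the ~100 m effective detection range quoted above.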

Image sensors can acquire rich surrounding information at high frame rates and high resolutions, and they are cheap. Multiple cameras with different FOVs and resolutions can be deployed to cover different distances and ranges; visual perception resolution can reach 2K-4K.

However, the image sensor is a passive sensor with limited depth perception and poor ranging accuracy. In harsh environments especially, the difficulty of completing sensing tasks increases significantly.

In the face of strong light, low illumination at night, and weather such as rain, snow, and fog, intelligent driving places high demands on sensor algorithms. Although lidar is insensitive to ambient light, its ranging is strongly affected by waterlogged roads, glass walls, and the like.

It can be seen that lidar and image sensors each have their own strengths and weaknesses. Most high-level intelligent driving passenger cars therefore fuse different sensors, so that they complement each other and provide redundancy.

Such a fused sensing solution has also become one of the key technologies for high-level autonomous driving.

02 Point cloud and image fusion perception based on deep learning

The fusion of point clouds and images belongs to the field of multi-sensor fusion (MSF). There are traditional stochastic methods and deep learning methods. According to the abstraction level at which the fusion system processes information, fusion is mainly divided into three levels:

Data layer fusion (Early Fusion)

First fuse the raw sensor observations, then extract features from the fused data for recognition. In 3D target detection, PointPainting (CVPR 2020) adopts this approach: it first performs semantic segmentation on the image, maps the segmentation scores onto the point cloud via a point-to-pixel projection matrix, and then feeds the "painted" point cloud into a 3D point-cloud detector to regress the target boxes.
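The core "painting" step can be sketched in a few lines: project each lidar point into the image plane and append that pixel's per-class segmentation scores to the point. This is a minimal NumPy sketch, not the paper's reference implementation; the array layouts and the simple clipping of out-of-image points are assumptions for illustration.

```python
import numpy as np

def paint_points(points_xyz, seg_scores, proj_mat):
    """PointPainting-style early fusion: append per-pixel semantic
    scores to lidar points.

    points_xyz: (N, 3) lidar points, assumed already in the camera frame
    seg_scores: (H, W, C) per-pixel class scores from an image segmentation net
    proj_mat:   (3, 4) camera projection matrix mapping 3D points to pixels
    """
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])      # (N, 4) homogeneous
    uvw = homo @ proj_mat.T                              # (N, 3)
    uv = uvw[:, :2] / uvw[:, 2:3]                        # perspective divide
    h, w, _ = seg_scores.shape
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)          # pixel column
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)          # pixel row
    return np.hstack([points_xyz, seg_scores[v, u]])     # (N, 3 + C)
```

The painted (N, 3 + C) cloud is then consumed by any standard 3D detector in place of the raw (N, 3) cloud.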


Feature layer fusion (Deep Fusion)

First extract features from the observation data provided by each sensor, then fuse these features for recognition. In deep-learning-based fusion, this approach runs a feature extractor on both the point-cloud branch and the image branch; the two branch networks are fused semantically, level by level, in the forward pass, achieving multi-scale semantic fusion.

Deep-learning-based feature layer fusion places high demands on spatiotemporal synchronization between the sensors; poor synchronization directly degrades the feature fusion. At the same time, due to differences in scale and viewing angle, it is difficult for the fused LiDAR and image features to achieve a 1+1 > 2 effect.
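The level-by-level scheme above can be sketched with a toy loop: at each level, the (already aligned) point-cloud features, image features, and the previous level's fused output are concatenated and passed through a learned layer. This is a shape-level sketch with random stand-in weights, not any particular network; in practice the hard part is exactly the spatiotemporal alignment the text warns about.

```python
import numpy as np

def relu_linear(x, w):
    """Stand-in for a learned layer: linear map followed by ReLU."""
    return np.maximum(x @ w, 0.0)

def deep_fusion(pc_feats, img_feats, weights):
    """Toy level-by-level feature fusion (Deep Fusion sketch).

    pc_feats / img_feats: lists of (N, C) per-point feature maps, one per
    level, assumed already spatially aligned between the two branches.
    weights: one stand-in weight matrix per level.
    """
    fused = None
    for pc, img, w in zip(pc_feats, img_feats, weights):
        parts = [pc, img] if fused is None else [pc, img, fused]
        fused = relu_linear(np.hstack(parts), w)  # fuse, then feed forward
    return fused

# Two-level example: 5 points, 4-dim features per branch, 6-dim fused output.
rng = np.random.default_rng(0)
pc = [rng.normal(size=(5, 4)) for _ in range(2)]
img = [rng.normal(size=(5, 4)) for _ in range(2)]
ws = [rng.normal(size=(8, 6)), rng.normal(size=(14, 6))]
out = deep_fusion(pc, img, ws)
print(out.shape)  # (5, 6)
```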


Decision-making layer fusion (Late Fusion)

Compared with the first two, this is the least complex fusion method. It fuses neither at the data layer nor at the feature layer; instead it is target-level fusion: the different sensors' network structures do not affect each other and can be trained and combined independently.

Since the two sensors and detectors fused at the decision layer are independent of each other, the system can still fall back on the remaining sensor if one fails, so engineering robustness is better.
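A minimal sketch of such target-level fusion: match the two detectors' boxes by IoU, boost confidence where both sensors agree, and keep unmatched detections so that a failed sensor still leaves usable output. The 2D boxes, the 0.5 IoU threshold, and the noisy-OR score combination are illustrative assumptions, not a standard prescribed by any particular system.

```python
def iou(a, b):
    """Axis-aligned 2D IoU between boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def late_fuse(lidar_dets, camera_dets, iou_thr=0.5):
    """Decision-level fusion of (box, score) detections from two sensors."""
    fused, matched = [], set()
    for lbox, lscore in lidar_dets:
        best_j, best_iou = -1, iou_thr
        for j, (cbox, _) in enumerate(camera_dets):
            if j not in matched and iou(lbox, cbox) >= best_iou:
                best_j, best_iou = j, iou(lbox, cbox)
        if best_j >= 0:  # both sensors saw it: combine scores (noisy-OR)
            matched.add(best_j)
            cscore = camera_dets[best_j][1]
            fused.append((lbox, 1 - (1 - lscore) * (1 - cscore)))
        else:            # lidar-only detection is kept as-is
            fused.append((lbox, lscore))
    # camera-only detections survive too -- this is the redundancy property
    fused += [d for j, d in enumerate(camera_dets) if j not in matched]
    return fused
```

If `lidar_dets` is empty (sensor failure), the function simply returns the camera detections unchanged, which is the redundancy behavior described above.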


As lidar-visual fusion perception technology continues to iterate, and as scenario knowledge and case experience accumulate, more and more full-stack fusion computing solutions will emerge, bringing a safer and more reliable future for autonomous driving.
