A brief analysis of image distortion correction technology on the difficulties of smart car perception front-end processing

We know that when shooting images, the ideal camera position is perpendicular to the shooting plane, which ensures the image reproduces the scene in its original geometric proportions. In the actual application of smart driving cars, however, the limitations of the vehicle body structure and the preview distance required for body control mean that the camera is generally installed at an angle to the ground, with its horizontal and vertical scanning surfaces spreading out in a fan shape. This installation angle causes imaging distortion at the edges of the image, which leads to a series of problems in later image processing, such as the following:

1) Vertical lines are photographed as oblique lines, resulting in slope calculation errors;

2) Distant curves may be compressed due to distortion, resulting in curvature calculation errors, etc.;

3) For vehicles in adjacent lanes, serious problems occur during recognition: distortion causes mismatching during post-processing.

Problems such as these can appear throughout image perception. If distortion is not handled properly, it degrades overall image quality and poses a greater risk to subsequent neural-network recognition. To meet the real-time control requirements of smart cars, a corresponding correction algorithm for camera image distortion generally needs to be designed for each practical application scenario.

01 The main types of distortion in smart cars

Camera distortion includes radial distortion, tangential distortion, centrifugal distortion, thin prism distortion, and so on. The camera distortions on smart cars are mainly radial distortion and tangential distortion.

Radial distortion is divided into barrel distortion and pincushion distortion.

The surround-view cameras used in smart parking systems usually shoot with wide-angle lenses, and their corresponding distortion type is typically radial distortion. The main cause of radial distortion is irregular variation in the radial curvature of the lens. Its characteristic is that the distortion is centered on the principal point and grows along the radial direction: the farther a point is from the center, the greater its deformation. A rectangle with severe radial distortion must be corrected into the image an ideal linear lens would produce before it can enter back-end processing.

The front-view, side-view, and rear-view cameras generally used in driving systems are ordinary CMOS cameras. During installation of a front or side camera, the lens may not be strictly parallel to the imaging plane; manufacturing defects can also leave the lens non-parallel to the imaging plane. Either case produces tangential distortion, which typically arises when the imager is attached to the camera.


The radial and tangential distortion models contain a total of 5 distortion parameters. In OpenCV they are arranged as a 5×1 matrix containing, in order, k1, k2, p1, p2, k3, and are usually defined in the form of a Mat matrix.

For distortion correction, these 5 parameters are the camera's 5 distortion coefficients, which must be determined during camera calibration. The parameters k1, k2, and k3 are called radial distortion parameters, where k3 is optional; for cameras with severe distortion (such as fisheye cameras), k4, k5, and k6 may also be needed. Tangential distortion is represented by two parameters, p1 and p2. This yields five parameters in total: k1, k2, k3, p1, p2. These five parameters, which are necessary to eliminate distortion, are called the distortion vector and, together with the intrinsic matrix, belong to the camera's intrinsic parameters.
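As a small illustration of the layout described above, the 5×1 coefficient vector can be sketched in pure Python (the numeric values here are invented for illustration, not calibration results):

```python
# OpenCV arranges the distortion coefficients as a 5x1 vector in the
# order (k1, k2, p1, p2, k3); the values here are purely illustrative.
k1, k2, p1, p2, k3 = -0.28, 0.07, 0.001, -0.0005, 0.0
dist_coeffs = [[k1], [k2], [p1], [p2], [k3]]   # 5x1 layout, row by row

# Radial terms occupy positions 0, 1, and 4; the tangential terms sit in between.
radial = (dist_coeffs[0][0], dist_coeffs[1][0], dist_coeffs[4][0])
tangential = (dist_coeffs[2][0], dist_coeffs[3][0])
```

Note the ordering: k3 comes last, after the two tangential coefficients, which is easy to get wrong when building the vector by hand.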

Therefore, once these five parameters are obtained, the image deformation caused by lens distortion can be corrected. The following figure shows the effect after correction according to the lens distortion coefficients:

The formula for finding the correct position of this point on the pixel plane using the 5 distortion coefficients is as follows:

With $(x, y)$ the normalized image coordinates and $r^2 = x^2 + y^2$:

$$x_{corrected} = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2x^2)$$

$$y_{corrected} = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2y^2) + 2 p_2 x y$$

After de-distortion, the point can be projected onto the pixel plane through the intrinsic parameter matrix to obtain its correct position (u, v) on the image:

$$u = f_x \cdot x_{corrected} + c_x, \qquad v = f_y \cdot y_{corrected} + c_y$$
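A minimal pure-Python sketch of this two-step pipeline (distortion model first, then intrinsic projection). The coefficient and intrinsic values below are illustrative assumptions, not taken from the article:

```python
def apply_distortion(x, y, k1, k2, k3, p1, p2):
    # Map normalized image coordinates through the 5-coefficient
    # radial (k1, k2, k3) + tangential (p1, p2) distortion model.
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

def to_pixels(x, y, fx, fy, cx, cy):
    # Project normalized coordinates through the intrinsic matrix
    # to obtain the pixel position (u, v).
    return fx * x + cx, fy * y + cy

# Illustrative values only: mild barrel distortion on a 640x480 sensor.
xd, yd = apply_distortion(0.1, 0.2, k1=-0.1, k2=0.01, k3=0.0, p1=0.001, p2=-0.0005)
u, v = to_pixels(xd, yd, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

With all five coefficients set to zero, `apply_distortion` reduces to the identity, which is a quick sanity check on an implementation.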

02 Image distortion correction method

Unlike the camera model itself, image de-distortion compensates for lens defects: radial/tangential de-distortion is applied first, and the camera model is used afterwards. The choice of de-distortion method mainly comes down to which camera model is chosen for image projection.

Typical camera model projection methods include the spherical model and the cylindrical model.

1. Fisheye camera imaging distortion correction

A fisheye lens typically produces large deformation. During imaging with an ordinary camera, a straight line projected onto the image plane remains a straight line of finite size. When the image captured by a fisheye camera is projected onto the image plane, however, straight lines become greatly stretched, and in some scenes lines are projected toward infinity, so the pinhole model cannot model a fisheye lens.

To project as much of the scene as possible onto a limited image plane, a fisheye lens is composed of more than a dozen different lens elements. During imaging, the incident light is refracted to varying degrees and projected onto an imaging plane of limited size, giving the fisheye lens a much larger field of view than ordinary lenses.

Research shows that the model a fisheye camera follows during imaging is approximately the unit sphere projection model. To better fit the derivation of the camera pinhole model, the common approach is to describe the imaging as a projection onto a spherical camera model.

The analysis of the fisheye camera imaging process can be divided into two steps:

  • Three-dimensional space points are linearly projected onto a sphere. This sphere is a virtual one we assume, and its center is considered to coincide with the origin of the camera coordinates.
  • The point on the unit sphere is projected onto the image plane. This process is nonlinear.

The following figure shows the image processing process from a fisheye camera to a spherical camera in a smart driving system. Assume the point in the camera coordinate system is X = (x, y, z) and its pixel coordinate is x = (u, v). Its projection process is then expressed as follows:

[Figure: image processing from a fisheye camera to a spherical camera]

1) First, use the camera to collect the three-dimensional points in the world coordinate system, and project the imaging point in the image coordinate system onto normalized unit-sphere coordinates;

[Formula: projection of the imaging point onto the normalized unit sphere]

2) Offsetting the camera coordinate center along the z-axis gives the following:

[Formula: coordinates after offsetting the center along the z-axis]

3) Considering the unit sphere, normalize the point onto the spherical surface of radius 1:

[Formula: normalization onto the unit sphere]

4) Transform the spherical projection model into the pinhole model to obtain the corresponding principal point coordinates, with which the corresponding standard camera coordinate system model can be established:

[Formula: spherical-to-pinhole model transformation]
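The four steps above can be sketched as a unified (unit-sphere) projection. Since the article's formulas survive only as figures, the mirror offset xi and the intrinsic values below are assumptions for illustration, not the article's exact parameterization:

```python
import math

def unified_fisheye_project(X, Y, Z, xi, fx, fy, cx, cy):
    # Step 1: linearly project the 3D point onto the virtual unit sphere
    # centered at the camera coordinate origin.
    n = math.sqrt(X * X + Y * Y + Z * Z)
    xs, ys, zs = X / n, Y / n, Z / n
    # Steps 2-3: offset the projection center by xi along the z-axis,
    # then perspective-divide (normalize) in the shifted frame.
    x, y = xs / (zs + xi), ys / (zs + xi)
    # Step 4: fall back to the pinhole model via the intrinsics to get (u, v).
    return fx * x + cx, fy * y + cy

# A point on the optical axis maps to the principal point:
u, v = unified_fisheye_project(0.0, 0.0, 2.0, xi=1.0, fx=300.0, fy=300.0, cx=640.0, cy=360.0)
```

Setting xi = 0 collapses the model back to an ordinary pinhole projection, which shows how the spherical model generalizes the pinhole one.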

2. Cylindrical coordinate projection

The images captured by terminals such as front-view and side-view cameras mainly exhibit tangential distortion, for which a cylindrical camera model is usually recommended. Its advantage is that, as with a fisheye panorama, the user can switch the line of sight arbitrarily within a 360-degree range, and can also zoom along a line of sight to appear closer or farther away. Cylindrical panoramic images are also easier to process: the cylindrical surface can be cut along its axis and unfolded onto a plane, so traditional image processing methods can often be applied directly, and cylindrical panoramas do not require very accurate camera calibration. The user has a 360-degree viewing angle in the horizontal direction and some freedom of viewing angle in the vertical direction, though the vertical range is limited. Since the image quality of the cylindrical model is uniform and details are more realistic, it has a wide range of applications.

Generally speaking, the significant advantages of cylindrical panorama are summarized in the following two points:

1) Acquiring its individual photos is simpler than for the cubic or spherical forms; ordinary vehicle-mounted cameras (such as front-view and side-view cameras) can essentially capture the original images.

2) A cylindrical panorama is easily unfolded into a rectangular image, which can be stored and accessed directly using common computer image formats. The cylindrical panorama limits the rotation of the viewer's line of sight to less than 180 degrees in the vertical direction, but in most applications a 360-degree horizontal panorama is sufficient to express the spatial information.


Here we focus on the algorithm for correcting distortion in the original image using a cylindrical camera. This is in fact the process of obtaining the mapping relationship from a virtual camera to the original camera, where the virtual camera refers to the mapping between real images and the generated cylindrical images.

The following figure shows the process of converting an ordinary vehicle camera image to a cylindrical camera image in a smart driving system. The essence of obtaining a virtual camera image is finding the mapping relationship between the virtual camera and the original camera. The general process is as follows:

[Figure: conversion from an ordinary vehicle camera to a cylindrical camera]

First, the front/side-view original video image is set as the target image dst img. A point (u, v) on the target image is the basis for a 2D-to-3D back-projection into the target camera coordinate frame, from which the target camera reconstructs the point's position (x, y, z) in the world coordinate system. Then, a projection transformation algorithm in the three-dimensional coordinate system yields the corresponding original camera image under the virtual camera. Finally, a 3D-to-2D projection of the original camera image gives the corresponding corrected position src img (u', v'), from which the image can be reconstructed to restore the target image dst img under the virtual camera.

It can be seen from the cylindrical camera model that the transformation formula from the cylindrical camera model to the pinhole camera model is as follows:

[Formula: cylindrical-to-pinhole camera model transformation]

In the above formula, u and v represent coordinates on the pinhole camera image plane (i.e., in the pixel coordinate system), and fx, fy, cx, and cy are the camera intrinsics (the focal lengths in pixels and the principal point coordinates). The point is multiplied by the radial distance in the cylindrical coordinate system to obtain its corresponding projection on the cylinder.

ρ is approximated by a polynomial. The 2D-to-3D mapping of the cylindrical camera is under-determined: when Tdst = Tsrc, the virtual-camera mapping obtained from the side-view/front-view camera's 2D image is the same for different values of ρ; if Tdst != Tsrc, the resulting virtual camera image changes with ρ. For a given cylindrical 2D position (u, v) and a given ρ, the 3D camera coordinates xc, yc, zc in the dst camera's cylindrical coordinate system can be calculated from the formula above.

Φ is also approximated by a polynomial. Φ is the angle between the incident light and the image plane; this value is very similar to the corresponding parameter of a fisheye camera.

The next step is the camera transformation process, which can be summarized as follows.

First, set the virtual camera image resolution to the resolution of the desired bird's-eye-view IPM map; the principal point of the virtual camera image is the center of the IPM map resolution (generally assumed to have no offset). Next, set fx, fy and the camera position of the virtual camera; the height is set to 1, which matches the way fx and fy are set, and the offset in y can be modified as needed. From this, the dst camera coordinates (xc, yc, zc)dst can be converted into observation coordinate system (VCS) coordinates using the target camera's extrinsic parameters (R, T)dst, and then, combined with the src camera's extrinsic parameters (R, T)src, the VCS coordinates are converted into the src camera coordinates (xc, yc, zc)src.
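A hedged sketch of this pipeline: back-project a cylindrical pixel to a 3D ray in the dst (virtual) camera frame, then chain the dst and src extrinsics through the VCS. The cylindrical convention (azimuth scaled by fx, height by fy) and all numeric values are assumptions for illustration:

```python
import math

def cylinder_backproject(u, v, fx, fy, cx, cy):
    # 2D cylindrical pixel -> 3D direction in the dst (virtual) camera frame.
    phi = (u - cx) / fx            # azimuth angle around the cylinder axis
    h = (v - cy) / fy              # height along the cylinder axis
    return (math.sin(phi), h, math.cos(phi))   # point on a unit-radius cylinder

def mat_vec(R, p):
    # 3x3 rotation matrix times 3-vector.
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

def cam_to_vcs(p, R, T):
    # Camera coordinates -> vehicle coordinate system: p_vcs = R @ p + T.
    q = mat_vec(R, p)
    return tuple(q[i] + T[i] for i in range(3))

def vcs_to_cam(p, R, T):
    # Inverse rigid transform: p_cam = R^T @ (p_vcs - T).
    d = tuple(p[i] - T[i] for i in range(3))
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return mat_vec(Rt, d)

# Identity extrinsics shown purely for illustration.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
ZERO = (0.0, 0.0, 0.0)
ray_dst = cylinder_backproject(640.0, 360.0, 300.0, 300.0, 640.0, 360.0)
ray_src = vcs_to_cam(cam_to_vcs(ray_dst, I3, ZERO), I3, ZERO)
```

In a real system (R, T)dst and (R, T)src come from calibration; the identity values here only demonstrate the dst → VCS → src chaining order described in the text.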


03 Summary

Since vehicle cameras usually carry different imaging lenses, this multi-element structure means the original pinhole camera model cannot simply be adapted to analyze the refraction in a vehicle camera. For fisheye cameras in particular, the need to expand the viewing range makes the image distortion caused by refraction even more pronounced. In this article, we focused on de-distortion methods adapted to the various types of visual sensors in intelligent driving systems, mainly projecting the image from the world coordinate system into a virtual spherical coordinate system or a virtual cylindrical coordinate system, and relying on the 2D-to-3D camera transformation to remove distortion. Compared with classic de-distortion algorithms, some of these algorithms have been improved through long-term practice.


Statement
This article is reproduced from 51CTO.COM.