
LiDAR sensing technology solution under severe weather conditions

王林 (forwarded) · 2023-05-10

01 Abstract

Autonomous vehicles rely on various sensors to collect information about their surroundings. The vehicle's behavior is planned based on this environmental perception, so its reliability is crucial for safety reasons. Active LiDAR sensors are capable of creating accurate 3D representations of a scene, making them a valuable addition to the environmental perception of autonomous vehicles. LiDAR performance degrades in adverse weather conditions such as fog, snow, or rain due to light scattering and occlusion. This limitation has recently spurred considerable research on methods to mitigate the resulting loss in perception performance.

This paper collects, analyzes, and discusses different aspects of LiDAR-based environment perception for dealing with adverse weather conditions. Topics such as the availability of suitable data, raw point cloud processing and denoising, robust perception algorithms, and sensor fusion to mitigate weather-induced deficiencies are discussed. In addition, the paper identifies the most pressing gaps in the current literature and points out promising research directions.

02 Introduction

LiDAR sensors have recently received increasing attention in the field of autonomous driving [1]. They provide sparse but accurate depth information, making them a valuable complement to more extensively studied sensors such as cameras and radar. A LiDAR sensor is an active sensor: it emits pulses of light that are reflected by the environment. The sensor then captures the reflected light and measures the distance to the environment from the elapsed time. Besides the time of flight, other characteristics can be evaluated, such as the amount of returned light and the elongation of the signal. In most cases, mechanical components are combined with multiple laser diodes to create a sparse point cloud of the complete scene [1]. A variety of different sensors are on the market.

LiDAR sensors suffer from several drawbacks in adverse weather. First, sensor icing or other mechanical complications can occur at freezing temperatures. Intrinsic and structural factors such as sensor technology, model, and mounting location influence the degree of degradation. Additionally, adverse weather affects intensity values, point counts, and other point cloud characteristics (see Figure 1). Generally speaking, when the emitted light encounters airborne particles caused by dust or severe weather, it is backscattered or deflected. This results in noisy distance and reflectivity measurements in the point cloud, because some laser pulses return to the sensor prematurely or are lost in the atmosphere. Such noise is particularly harmful when applying scene understanding algorithms. In this safety-critical use case, maintaining reliably high predictive performance is especially important. Therefore, countermeasures are needed to minimize the degradation of LiDAR perception performance under adverse weather conditions, or at least to detect the sensor's limitations in real-world scenarios.

[Figure 1: Effect of adverse weather on LiDAR point cloud characteristics.]

Most state-of-the-art algorithms are based on deep learning (DL) and rely on large amounts of data to derive general characteristics of the environment. While there is a line of research on unsupervised perception, most recent work requires correspondingly labeled raw data. This includes bounding boxes for object detection and point-wise class labels for semantic segmentation. Manually labeling sparse and noisy point clouds is not only difficult but also costly and error-prone. Therefore, the question of how to simulate weather-specific noise or augment existing point clouds with it is particularly interesting.

Although there is a large body of research analyzing the performance degradation of LiDAR sensors under severe weather conditions, a comprehensive summary of algorithmic countermeasures to improve perception has been lacking. Moreover, existing surveys on autonomous driving in severe weather address weather-induced sensor degradation in general but do not identify weather-related issues specific to LiDAR sensors. This article summarizes and analyzes various methods for coping with adverse weather in LiDAR perception, addressing the topic from three perspectives:

  • Data availability: real-world and synthetic datasets for developing robust lidar perception algorithms;
  • Point cloud operations: sensor-specific weather robustness and perception-independent point cloud processing (e.g. weather classification, point cloud denoising);
  • Robust perception: perception algorithms that can handle weather-induced noise in point clouds by fusing multiple sensors, adapting the training process, or improving the overall robustness of the perception model.

Finally, the remaining gaps in the current state of the art and the most promising research directions are summarized.

03 Adverse weather data

To train a DL model on any kind of perception task, a large amount of data is required. For the still-dominant supervised methods, this data additionally has to be labeled, either by automated labeling methods or manually. Either way, obtaining accurately labeled sparse LiDAR data is expensive and cumbersome, and becomes even harder when the raw point cloud is corrupted by weather-induced noise.

Therefore, valuable datasets with high-quality labels are needed. In general, there are three options for obtaining LiDAR point clouds with weather-characteristic noise patterns: real-world recordings, augmented point clouds, and simulated point clouds. The first is generated using a test vehicle with an appropriate sensor setup in severe weather conditions. The latter two require physical models or DL-based methods to create parts of, or entire, point clouds.

Real World Datasets

Most existing datasets for LiDAR perception benchmarks were recorded under favorable weather conditions. For the perception algorithms developed on them to be usable in the real world, the underlying datasets must reflect all weather conditions. Beyond clear weather, there are extensive datasets that explicitly include rain, snow, and fog.

Table I shows an overview of publicly available datasets for studying LiDAR perception in severe weather conditions. The datasets were recorded under different conditions and vary greatly in size. Most of them were recorded in real-world driving scenarios, while two stem (partly) from weather chambers. Weather chambers have the advantage of complete control over the weather conditions and the surrounding environment, e.g., in terms of obstacles. Nonetheless, they do not fully reflect real-world situations.

[Table I: Overview of publicly available LiDAR datasets recorded under severe weather conditions.]

Additionally, each dataset uses a different sensor setup. [27] specifically benchmarked LiDAR manufacturers and models under severe weather conditions. In addition to LiDAR sensors, all datasets provide RGB camera recordings, and some even include radar, stereo, event, gated, or infrared cameras.

These datasets are designed to address different perception and driving tasks of autonomous vehicles. Almost all sensor setups (except [21]) include positioning and motion sensors, namely GPS/GNSS and IMU, which makes them suitable for developing and testing SLAM algorithms. All datasets provide labels for object detection or point-wise segmentation, except [29], which only provides motion ground truth.

Finally, all datasets include some metadata about the weather conditions. This is critical for developing almost any type of perception model for severe weather: understanding the intensity and nature of the surrounding weather conditions is crucial, at least for thorough validation. Only one dataset provides point-wise weather labels, namely for falling snow and snow accumulated at the roadside.

The advantage of datasets consisting of real-world recordings is their high realism. The disadvantage is that labels for the recorded scenes are only partially available (point-wise) or, if the data are recorded in a weather chamber, the scenes are limited compared to more complex real-world scenarios. Manual point-wise labeling of LiDAR point clouds under adverse weather conditions is particularly challenging because, in many cases, it is impractical to distinguish clutter or noise from the actual reflected signal.

Weather Augmentation

Augmenting existing datasets with adverse weather effects provides an efficient way to generate large amounts of data, rather than tediously collecting and labeling new datasets for each adverse weather effect. Typically, physics-based or empirically based augmentation models are used to inject certain adverse weather effects into clear-weather point clouds, whether these stem from real-world recordings or from simulation. This makes it possible to obtain scenes corrupted by weather-specific noise while preserving all the interesting edge cases and annotations already present in the dataset.

An augmentation method defines the mapping from clear-weather points to corresponding points under adverse weather conditions. For this purpose, reference is often made to the theoretical LiDAR model in [32], which models the effects of rain, fog, and snow. It models the received intensity distribution as a linear system, convolving the emitted pulse with the scene response. The scene response models reflections from solid objects as well as backscatter and attenuation due to severe weather.
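As a rough sketch of this model family (the exact constants and notation vary between publications, so the form below is only illustrative), the received power at range R can be written as the transmitted pulse convolved with a spatial impulse response:

$$ P_R(R) = C_A \int_0^{2R/c} P_T(t)\, H\!\left(R - \tfrac{c\,t}{2}\right)\,\mathrm{d}t, \qquad H = H_{\text{target}} \cdot H_{\text{weather}}, $$

where $P_T$ is the emitted pulse, $C_A$ a sensor-specific constant, and the impulse response $H$ combines the reflection of solid targets with weather-induced backscatter and attenuation.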

A more practical fog augmentation, which can be applied directly to point clouds, is introduced in [9]. It is based on the maximum viewing range, which is a function of the measured intensity, the LiDAR parameters, and the optical visibility in fog. If the distance of a clear-weather point exceeds the fog-limited maximum viewing range, either a randomly placed scatter point appears or the point is lost with a certain probability. The model accommodates rainfall by converting the visibility parameter and scattering probability into rain rates.
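A minimal sketch of such a range-based fog augmentation is given below. The parameter names (max_fog_range, p_scatter), the uniform sampling of scatter distances, and the intensity attenuation factor are illustrative assumptions, not the exact model of [9]:

```python
import numpy as np

def augment_fog(points, intensities, max_fog_range, p_scatter=0.5, rng=None):
    """Toy fog augmentation: returns beyond the fog-limited viewing range are
    either replaced by a random scatter point closer to the sensor or dropped.
    `points` is an (N, 3) array; `intensities` an (N,) array."""
    if rng is None:
        rng = np.random.default_rng()
    dist = np.linalg.norm(points, axis=1)
    keep = dist <= max_fog_range                      # unaffected clear-weather returns
    far = ~keep
    scatter = far & (rng.random(len(points)) < p_scatter)
    out_pts, out_int = points.copy(), intensities.copy()
    # Relocate scattered returns to a random close range along the same ray.
    new_dist = rng.uniform(0.05, 1.0, scatter.sum()) * max_fog_range
    out_pts[scatter] *= (new_dist / dist[scatter])[:, None]
    out_int[scatter] *= 0.1                           # weak scatter returns (assumed)
    mask = keep | scatter                             # remaining far points are lost
    return out_pts[mask], out_int[mask]
```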

However, these models ignore the beam divergence of the emitted LiDAR pulses, which has been taken into account for rain augmentation. There, the number of intersections between a supersampled beam, which simulates beam divergence, and spherical raindrops is calculated. If the number of intersections exceeds a certain threshold, a scatter point is added. The augmentation method in [35] extends this approach so that missing points are possible as well; moreover, it is also applicable to snow and fog.

Another augmentation for fog, snow, and rain is introduced in [36]. The model operates in the power domain and does not rely on, for example, calculating intersection points like the previously discussed methods. Additionally, a computationally more efficient strategy for sampling scatter-point distances is used to simulate beam divergence. In essence, the model compares the attenuated power reflected from solid objects and from randomly sampled scatterers against a distance-dependent noise threshold. A scatter point is added if its power exceeds that of the solid object; if a return falls below the distance-dependent noise threshold, the point is lost.
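The decision rule of such a power-domain model can be sketched as follows; the exponential attenuation, the distance-dependent noise threshold, and all parameter names are simplifying assumptions rather than the actual formulation of [36]:

```python
import numpy as np

def power_domain_return(d_obj, p_obj, d_scat, p_scat, alpha, noise_floor):
    """Toy power-domain rule: compare the attenuated return of the solid object
    with that of a randomly sampled scatterer against a distance-dependent
    noise threshold. Returns (distance, power) of the surviving echo, or None
    if the point is lost."""
    p_obj_att = p_obj * np.exp(-2 * alpha * d_obj)    # two-way attenuation
    p_scat_att = p_scat * np.exp(-2 * alpha * d_scat)
    threshold = lambda d: noise_floor * d ** 2        # assumed noise model
    if p_scat_att > threshold(d_scat) and p_scat_att > p_obj_att:
        return d_scat, p_scat_att                     # scatter point is added
    if p_obj_att > threshold(d_obj):
        return d_obj, p_obj_att                       # object return survives
    return None                                       # point is lost
```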

In addition to physics-based models, empirical models can be used for augmentation. An empirical augmentation method for spray whirled up by other vehicles can be found in [38]. Central to this model is the observation, from dedicated experiments, that spray is organized in clusters. Another data-driven approach is proposed in [39], which relies on spray scenes from the Waymo dataset. In [40], a computationally more expensive spray augmentation is proposed that relies on a renderer with a physics engine.

Finally, DL-based methods can be applied for adverse weather augmentation. Inspired by image-to-image translation, [41] proposes a method based on generative adversarial networks (GANs) that can transform point clouds from sunny to foggy or rainy conditions. The results were qualitatively compared with real fog and rain point clouds recorded in weather chambers.

However, assessing the quality and realism of augmentation methods is challenging. Some authors use weather chambers or other controlled environments to allow comparisons with real-world weather effects. Beyond that, an augmentation method is generally considered realistic if it improves perception performance under real-world adverse weather conditions.

04 Point Cloud Processing and Denoising

This section describes methods for dealing with adverse weather conditions that operate on the sensor technology or the point cloud itself, i.e., independently of the actual perception task. The paper therefore analyzes general sensor-related weather robustness and the possibility of estimating the degree of performance degradation depending on the weather conditions. Furthermore, there are numerous studies on removing weather-induced noise from LiDAR point clouds using classic denoising methods and DL.

Sensor-related weather robustness

Depending on their technology, features, and configuration, different LiDAR models may be more or less strongly affected by weather conditions. Due to eye-safety restrictions and the suppression of ambient light, two operating wavelengths dominate for LiDAR sensors: 905 nm and 1550 nm, with 905 nm sensors being the most widely available. This is partly due to their better performance in adverse weather, i.e., lower absorption by raindrops, better reflectivity of snow, and less degradation in fog. For a comprehensive discussion of LiDAR technologies and wavelengths under severe weather conditions, we refer to [17].

In addition, the performance of full-waveform LiDAR (FWL) under severe weather conditions has been studied. FWL measures not just one or two echoes but all weaker echoes as well, effectively capturing more noise but also gathering more information about the surrounding environment. Although FWL demands considerable computational resources, it has proven useful for analyzing the surrounding medium, which can provide a basis for recognizing changing conditions and adapting to them dynamically.

Sensor degradation estimation and weather classification

Since LiDAR sensors degrade differently under different weather conditions, estimating the degree of sensor degradation is the first step in processing corrupted LiDAR point clouds. Progress has been made in developing methods to better identify perception limits, so that false detections do not propagate to downstream tasks.

First of all, studies characterizing sensor degradation under various weather conditions have laid a solid foundation for the calibration and further development of sensors for severe weather, although they do not yet assess weather classification capability.

The first work to actually simulate the impact of rainfall on LiDAR sensors is presented in [33]. The authors propose a mathematical model derived from the LiDAR equation that allows performance degradation to be estimated from the rain rate and the maximum sensing range.

In subsequent work, the estimation of sensor degradation under severe weather was formulated as an anomaly detection task and as a verification task. The former employs a DL-based model designed to learn a latent representation that distinguishes clear from rainy LiDAR scans, thereby quantifying the extent of performance degradation. The latter proposes a reinforcement learning (RL) model to identify failures of object detection and tracking models.

While the above methods aim to quantify the degradation of sensor performance itself, another line of research focuses on classifying the ambient weather conditions (i.e., clear, rain, fog, and snow). Satisfactory results have been achieved with classic machine learning methods (k-nearest neighbors and support vector machines) operating on hand-crafted features of the LiDAR point cloud: [10] proposed a feature set for point-wise weather classification.
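A sketch of this recipe, hand-crafted per-point features fed to a k-nearest-neighbor classifier, is shown below. The specific features and parameters are illustrative and differ from the actual feature set of [10]:

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neighbors import KNeighborsClassifier

def point_features(points, intensities, k=8):
    """Illustrative hand-crafted per-point features for weather classification:
    range, intensity, mean neighbor distance, and a local density proxy."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # nearest neighbor is the point itself
    neigh = dists[:, 1:]
    return np.column_stack([
        np.linalg.norm(points, axis=1),      # range to sensor
        intensities,                          # return strength
        neigh.mean(axis=1),                   # mean neighbor distance
        1.0 / (neigh[:, -1] ** 3 + 1e-6),     # crude local density proxy
    ])

# Training on labeled scans: y holds per-point labels in {clear, rain, fog, snow}.
# clf = KNeighborsClassifier(n_neighbors=5).fit(point_features(pts, ints), y)
```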

[51] developed a probabilistic model for frame-wise regression of the rain rate. Working with domain experts, they accurately inferred rain rates from LiDAR point clouds.

It should be noted that most methods are trained and evaluated on data collected in weather chambers. While the ability to carefully control the weather conditions allows high reproducibility, such data often do not accurately reflect real-world conditions. To properly evaluate the classification ability of each method, a thorough study on real-world data is necessary [50].

Point cloud denoising

Weather effects manifest in LiDAR point clouds as specific noise patterns. As mentioned in the introduction, they may affect, among other things, the number of measurements in the point cloud and the maximum sensing range. Rather than augmenting point clouds with weather-specific noise, various methods can denoise them to reconstruct a clear measurement. In addition to classic filtering algorithms, some DL-based denoising work has recently emerged.

Besides applying perception tasks such as object detection to the denoised point clouds, metrics such as precision (preserving environmental features) and recall (filtering out weather-induced noise) are crucial for evaluating the performance of classic filtering methods. Computing these metrics requires point-wise labels for weather classes such as snow particles.

Radius Outlier Removal (ROR) filters noise based on the neighborhood of each point. This becomes problematic for LiDAR measurements of distant objects, as point clouds naturally become sparser with range. Advanced methods address this problem by dynamically adjusting the threshold with the sensing distance (Dynamic Radius Outlier Removal, DROR) or by taking into account the average distance to each point's neighbors (Statistical Outlier Removal, SOR). Both methods exhibit high running times, making them hardly suitable for autonomous driving. Fast Clustering Statistical Outlier Removal (FCSOR) and Dynamic Statistical Outlier Removal (DSOR) both propose ways to reduce the computational load while still removing weather artifacts from point clouds.
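The core idea of DROR can be sketched in a few lines: the neighbor-search radius grows with range, so sparse but legitimate far-field returns are not discarded as noise. The radius formula and constants below are illustrative, not the published parameterization:

```python
import numpy as np
from scipy.spatial import cKDTree

def dror_filter(points, k_min=3, base_radius=0.1, alpha=0.2):
    """Sketch of Dynamic Radius Outlier Removal: a point survives if it has
    enough neighbors within a radius that increases with sensing distance."""
    tree = cKDTree(points)
    rng = np.linalg.norm(points[:, :2], axis=1)       # horizontal range
    radius = np.maximum(base_radius, alpha * rng)     # range-dependent radius
    keep = np.zeros(len(points), dtype=bool)
    for i, (p, r) in enumerate(zip(points, radius)):
        # query_ball_point includes the point itself, hence the "- 1".
        keep[i] = len(tree.query_ball_point(p, r)) - 1 >= k_min
    return points[keep]
```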

Denoising methods for roadside LiDAR rely on a background model built from historical data (available because roadside sensors are fixed), combined with the basic principles of classic denoising, to identify dynamic points. [57] separated weather noise from real objects with the help of intensity thresholds. Unfortunately, this does not transfer easily to LiDAR sensors mounted on moving vehicles.

In contrast to classic denoising methods, DL-based LiDAR point cloud denoising is popular because the models can directly learn the underlying structure of weather noise. First, models based on convolutional neural networks (CNNs) have been used for effective weather denoising. Using temporal data as an additional cue further supports weather-specific noise removal because, naturally, weather noise fluctuates faster than the scene background or even the objects within the scene. CNN-based methods (especially voxel-based ones) outperform classic denoising methods in terms of noise filtering; moreover, thanks to fast GPU computation, their inference times are lower.

In addition to supervised CNN methods, unsupervised methods such as CycleGANs are able to transform noisy point cloud inputs into clear LiDAR scans. However, generative models are inherently noisy themselves, and it is difficult to verify the authenticity of the resulting point clouds.

05 Robust LiDAR Perception

Independent of efforts to reduce the domain shift caused by adverse weather at the data level, there are several ways to make LiDAR perception models more robust to adverse weather conditions, regardless of the quality and noise level of the data. Three lines of work exist: leveraging sensor fusion, enriching training through data augmentation with weather-specific noise, and general approaches to model robustness against domain shift that compensate for the performance degradation. It should be noted that, apart from sensor fusion methods, which address multiple perception tasks, the literature focuses on object detection; to the best of our knowledge, there is no work on other perception tasks such as semantic segmentation.

Using sensor fusion to deal with severe weather

Generally speaking, each sensor in an autonomous vehicle's sensor suite has its advantages and disadvantages. The most common sensors in such a suite are RGB cameras, radar, and LiDAR. As mentioned in the introduction, LiDAR perception suffers in the presence of visible airborne particles such as dust, rain, snow, or fog. Cameras are more sensitive to strong incident light and halo effects. Radar, in turn, is immune to both but lacks the ability to detect static objects and fine structures. It is therefore natural to fuse different sensors in order to mitigate their respective shortcomings and enable robust perception under diverse environmental conditions.

Early work on sensor fusion against the adverse effects of weather focused on developing robust data association frameworks. More recent research utilizes DL-based methods for robust multi-modal perception and mainly addresses the question of early versus late fusion for robustness under adverse weather conditions.

The choice between early and late fusion appears to depend on the sensor selection, the data representation, and the expected failure rates. Assuming that not all fused sensors are degraded to the same extent and at least one of them remains fully functional, late fusion appears preferable to early fusion: the model processes each sensor stream independently, so it can rely on working sensors and ignore faulty ones. Conversely, early fusion of radar and LiDAR depth maps helps filter false detections to obtain clean scans.
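The intuition behind late fusion can be illustrated with a small sketch: per-sensor detectors run independently, and a fusion stage reweights their outputs by an externally estimated health of each stream, so a degraded sensor is discounted. All names, the weighting scheme, and the threshold are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple      # (x, y, z, l, w, h, yaw)
    score: float    # detector confidence in [0, 1]
    sensor: str     # "lidar", "camera", or "radar"

def late_fuse(detections, stream_health):
    """Toy late fusion: rescale each detection's score by an externally
    estimated health factor of its sensor stream (e.g., from a weather or
    degradation classifier), then keep only confident detections."""
    fused = []
    for det in detections:
        w = stream_health.get(det.sensor, 1.0)   # 0.0 = fully degraded stream
        score = det.score * w
        if score > 0.3:                          # illustrative threshold
            fused.append(Detection(det.box, score, det.sensor))
    return fused

# Example: heavy fog degrades the camera stream, LiDAR is partly affected.
# fused = late_fuse(dets, {"lidar": 0.7, "camera": 0.2, "radar": 1.0})
```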

The data representation is another factor that partially answers the early-versus-late-fusion question. The bird's-eye view (BEV) representation of LiDAR data greatly facilitates object detection by improving the separability of objects. Consequently, any model that has learned to rely on these LiDAR features suffers a performance loss when the LiDAR data are corrupted. Complete sensor failure has been successfully handled using a teacher-student network.

Finally, some sensor fusion methods combine early and late fusion in a single model and exploit concepts such as temporal data and region-based fusion [72] or attention maps [73]. Another possibility is the adaptive, entropy-controlled fusion proposed in [21].
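As one loose interpretation of entropy-steered fusion (not the architecture of [21]), each sensor's feature map can be weighted by how peaked its activations are, down-weighting streams whose signal has been washed out by weather:

```python
import torch

def entropy_weighted_fusion(feature_maps, eps=1e-8):
    """Toy entropy-controlled fusion of per-sensor feature maps, each of shape
    (B, C, H, W). Streams with diffuse, washed-out activations (high spatial
    entropy, as under heavy fog) receive lower fusion weights."""
    weights = []
    for f in feature_maps:
        p = torch.softmax(f.flatten(2), dim=-1)               # (B, C, H*W)
        ent = -(p * (p + eps).log()).sum(dim=-1).mean(dim=1)  # (B,), mean over channels
        weights.append(-ent)                                  # low entropy -> high weight
    w = torch.softmax(torch.stack(weights), dim=0)            # normalize over sensors
    return sum(wi.view(-1, 1, 1, 1) * f for wi, f in zip(w, feature_maps))
```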

In addition to predictive performance, the model runtime should be considered when developing new perception methods. [68] introduced a new metric that combines the predictive performance of drivable-space segmentation with the inference runtime. Interestingly, the LiDAR-only model scored highest on this metric.

There is no doubt that it is convenient to compensate for a degraded sensor with unaffected sensors during adverse weather. However, improving LiDAR-only perception in adverse weather can make safety-critical applications such as autonomous driving even more reliable.

Enhancing training through data augmentation

While data augmentation is widely used in DL training strategies, weather-specific noise is particularly challenging to produce. Section 03 presented various methods for generating weather-specific noise in LiDAR point clouds. Applying such data augmentation during the training of perception models is the opposite approach to the point cloud denoising discussed in Section 04: the goal is not to remove weather-induced noise but to accustom the model to exactly this noise. Weather augmentation has been shown to be more effective than denoising in terms of robustness, which provides a valuable hint as to which research direction should be emphasized in the future.
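In practice, this amounts to corrupting clear-weather training samples on the fly. The sketch below reuses the hypothetical augment_fog from above; augment_snow and the probabilities are likewise illustrative placeholders:

```python
import random

def weather_augmented_batch(scans, p_fog=0.3, p_snow=0.2):
    """Toy on-the-fly weather augmentation during training: each clear-weather
    scan is corrupted with some probability so the detector learns features
    that survive weather-specific noise. Labels are kept unchanged."""
    out = []
    for points, intensities, labels in scans:
        r = random.random()
        if r < p_fog:
            points, intensities = augment_fog(points, intensities,
                                              max_fog_range=40.0)    # assumed value
        elif r < p_fog + p_snow:
            points, intensities = augment_snow(points, intensities)  # hypothetical
        out.append((points, intensities, labels))
    return out
```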

Several works have demonstrated the benefits of such data augmentation at training time by evaluating it on the task of 3D object detection.

Much work has dealt with the problem of selecting the best feature extractor for robust LiDAR perception under severe weather conditions. Point-based and voxel-based methods appear to be less affected by augmented weather effects, at least with respect to object detection, suggesting that some robustness can be achieved through a careful choice of the perception model. Additionally, there appears to be an interaction between the model architecture and the point cloud corruption caused by severe weather: the wet-ground augmentation proposed in [4] helped only some models, showing that detection problems caused by ray scattering are more or less severe depending on the model architecture.

Additionally, the size and shape of objects appear to play a role in how strongly a detection model's performance degrades. Smaller and underrepresented classes, such as the cyclist class in the STF dataset, are more susceptible to weather augmentation than better-represented classes such as cars and pedestrians. The number of (clear-weather) annotated objects in the training set is therefore a good indicator of object detection performance even under adverse weather. This suggests the effect works in both directions: weather-augmented training also helps detection performance in clear weather, and plentiful clear-weather annotations support detection under adverse weather.

Robust perception algorithms

Although fusion with complementary sensors can alleviate the weather-related performance degradation of each individual sensor, it is only a workaround for the problem at hand. Changes in weather conditions can be regarded as a special case of domain shift; therefore, methods developed to bridge domain gaps can be applied to weather-to-weather domain transfer (e.g., sunny/fog/snow). [77] provides a comprehensive overview of the current state of the art in domain adaptation, but these methods mainly address problems related to differing sensor resolutions or to the available data and labels.

In [78], the authors propose dataset-to-dataset domain transfer, which indirectly includes weather changes. They use a teacher-student setup for object detection, where the teacher is trained on Waymo Open (sunny) and generates labels for parts of Waymo Open and parts of Kirkland (rainy), while the student is trained on all labels and applied to Kirkland. Interestingly, the student seems to generalize better to the target domain, suggesting that it can cope with severe weather. However, it should be noted that the domain gap is not limited to changes in weather conditions; other factors such as sensor resolution and labeling strategies may mask the gap caused by weather.

The authors of [79] propose a robust object detection pipeline including an attention mechanism and global context feature extraction, which enables the model to ignore weather-induced noise while understanding the entire scene. Although their method does not perform well on both domains (KITTI, sunny, and CADC, rainy) simultaneously, joint training based on a maximum-discrepancy loss yields promising results and high performance on both the source and target domains. Here again, it is unclear which effects are attributable to the change in weather conditions itself, as the dataset-to-dataset variation appears to be very strong.

[80] focuses on mitigating weather-induced sensor degradation for RGB cameras and LiDAR. Although the authors exploit sensor fusion (derived from the entropy-based fusion proposed in [21]) as well as data augmentation for both sensors, their work strongly advocates a set of methods for bridging the gap to multiple unknown target domains for object detection. They achieve this by introducing domain discriminators and self-supervised learning via pre-training strategies. Their results show that their multi-modal, multi-target domain adaptation method generalizes well, for example, to fog scenes.

06 Discussion and Conclusion

This survey has outlined current research directions in LiDAR-based environment perception for autonomous driving under adverse weather conditions. The paper provides an in-depth analysis and discussion of the availability of training data for deep learning algorithms, of perception-independent point cloud processing techniques for detecting weather conditions and denoising LiDAR scans, and of state-of-the-art methods for robust LiDAR perception. In the following, the most promising research directions are summarized and the remaining gaps identified.

Adverse weather data: Several autonomous driving datasets include LiDAR sensors and also cover adverse weather conditions. Most of them provide object labels, but only one has point-wise class labels. Clearly, suitable real-world datasets are needed to train and validate the growing number of deep-learning-based LiDAR perception algorithms. Some works employ weather-specific data augmentation to simulate adverse weather effects; however, a method to evaluate the realism of the generated augmentations is missing.

Point cloud processing and denoising: Different LiDAR technologies react differently to adverse weather conditions. While sensor degradation under severe weather has been studied intensively, a systematic analysis of its impact on perception algorithms is lacking. Here, methods for sensor degradation estimation would be useful. Additionally, research on point cloud denoising is ongoing, but existing statistical methods have proven less effective than training with weather augmentation. Modern methods, such as those based on CNNs or GANs, may bridge this gap.

Robust LiDAR perception: A lot of research focuses on mitigating sensor degradation with the help of sensor fusion. While this has produced compelling results, improving perception using LiDAR alone in adverse weather conditions should not be overlooked. Sophisticated domain adaptation methods, as well as anomaly detection or uncertainty modeling, may help solve this problem. Viewing the weather-induced noise in LiDAR point clouds from such a different perspective may open new research streams that bridge the domain gaps caused by adverse weather conditions and reveal the potential of general domain adaptation approaches.


Statement: This article is reproduced from 51cto.com.