
Virtual-real domain adaptation method for autonomous driving lane detection and classification


arXiv paper "Sim-to-Real Domain Adaptation for Lane Detection and Classification in Autonomous Driving", May 2022, from the University of Waterloo, Canada.


While supervised detection and classification frameworks for autonomous driving require large annotated datasets, Unsupervised Domain Adaptation (UDA) driven by synthetic data generated from photo-realistic simulated environments is a lower-cost, less time-consuming alternative. This paper proposes a UDA scheme based on adversarial discriminative and generative methods for lane detection and classification in autonomous driving.

The paper also introduces Simulanes, a dataset generator that leverages CARLA's varied traffic scenarios and weather conditions to create a naturalistic synthetic dataset. The proposed UDA framework takes the labeled synthetic dataset as the source domain, while the target domain is unlabeled real data. Adversarial generation and a feature discriminator are used to adapt the learned model so that it predicts lane locations and classes in the target domain. Evaluation is performed on both real and synthetic datasets.

The open-source UDA framework is at github.com/anita-hu/sim2real-lane-detection, and the dataset generator is at github.com/anita-hu/simulanes.

Real-world driving is diverse, with varying traffic conditions, weather, and surrounding environments. The diversity of simulation scenarios is therefore crucial if the model is to adapt well to the real world. Several open-source simulators exist for autonomous driving, notably CARLA and LGSVL. This work chooses CARLA to generate the simulation dataset: in addition to a flexible Python API, CARLA ships rich prebuilt maps covering urban, rural and highway scenes.

The simulation data generator, Simulanes, produces a variety of simulation scenarios in urban, rural and highway environments, covering 15 lane classes and dynamic weather. The figure shows samples from the synthetic dataset. Pedestrian and vehicle actors are randomly spawned and placed on the map, making the dataset more challenging through occlusion. Following the TuSimple and CULane datasets, the maximum number of lanes near the vehicle is limited to 4, and row anchors are used as labels.

[Figure: sample frames from the Simulanes synthetic dataset]

Since the CARLA simulator does not directly provide lane location labels, CARLA's waypoint system is used to generate labels. A CARLA waypoint is a predefined position for the vehicle autopilot to follow, located in the center of the lane. In order to obtain the lane position label, the waypoint of the current lane is moved left and right by W/2, where W is the lane width given by the simulator. These moved waypoints are then projected into the camera coordinate system and spline-fitted to generate labels along predetermined row anchor points. The class label is given by the simulator and is one of 15 classes.
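The waypoint-shifting step above can be sketched in a few lines. The following is a simplified 2-D NumPy illustration, not the Simulanes implementation: `lane_boundaries_from_waypoints` and `sample_at_row_anchors` are hypothetical helpers, and a plain linear interpolation stands in for the spline fit and camera projection.

```python
import numpy as np

def lane_boundaries_from_waypoints(centers, lane_width):
    """Shift lane-center waypoints laterally by +/- W/2 to get lane boundaries.

    centers: (N, 2) lane-center positions in the ground plane, ordered along
    the direction of travel. Returns (left, right) boundary point arrays.
    """
    centers = np.asarray(centers, dtype=float)
    # Tangent direction along the lane, estimated by finite differences.
    tangents = np.gradient(centers, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # Left-pointing normal: tangent rotated by +90 degrees.
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
    half_w = lane_width / 2.0
    return centers + half_w * normals, centers - half_w * normals

def sample_at_row_anchors(points_uv, row_anchors):
    """Interpolate horizontal pixel positions at fixed row anchors.

    points_uv: (N, 2) image-space points (u = column, v = row) of one
    boundary after projection; row_anchors: the predefined v coordinates.
    """
    pts = np.asarray(points_uv, dtype=float)
    pts = pts[np.argsort(pts[:, 1])]          # np.interp needs increasing v
    return np.interp(row_anchors, pts[:, 1], pts[:, 0])
```

For a straight lane of width 4 m running along the x-axis, the left and right boundaries come out at y = +2 and y = -2, as expected.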

To generate a dataset with N frames, N is divided evenly across all available maps. Of the default CARLA maps, towns 1, 3, 4, 5, 7 and 10 were used, while towns 2 and 6 were excluded because the extracted lane position labels did not match the lane positions in the image. For each map, vehicle actors are spawned at random locations and roam randomly. Dynamic weather is achieved by smoothly varying the sun's position as a sinusoidal function of time and by occasionally producing storms, which affect the appearance of the environment through variables such as cloud cover, rainfall and standing water. To avoid saving multiple frames at the same location, the vehicle's position is compared with that of the previous frame, and the vehicle is respawned if it has been stationary for too long.
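The generation bookkeeping described above can be sketched independently of the CARLA API. `FrameScheduler`, `sun_altitude` and `StationaryMonitor` are illustrative names, not the generator's actual classes, and the period and thresholds are made-up defaults.

```python
import math

class FrameScheduler:
    """Split a target of N frames evenly across the available maps."""
    def __init__(self, n_frames, maps):
        self.per_map = {m: n_frames // len(maps) for m in maps}
        # Distribute any remainder over the first few maps.
        for m in maps[: n_frames % len(maps)]:
            self.per_map[m] += 1

def sun_altitude(t, period=120.0, max_altitude=75.0):
    """Smoothly varying sun altitude (degrees) as a sinusoid of time t (s)."""
    return max_altitude * math.sin(2.0 * math.pi * t / period)

class StationaryMonitor:
    """Flag a vehicle for respawning once it has been still for `patience` frames."""
    def __init__(self, min_move=0.5, patience=20):
        self.min_move, self.patience = min_move, patience
        self.last_pos, self.still_frames = None, 0

    def should_respawn(self, pos):
        if self.last_pos is not None:
            moved = math.dist(pos, self.last_pos) > self.min_move
            self.still_frames = 0 if moved else self.still_frames + 1
        self.last_pos = pos
        return self.still_frames >= self.patience
```

The monitor resets its counter whenever the vehicle moves more than `min_move` metres between frames, so only genuinely stuck vehicles trigger a respawn.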

When the sim-to-real algorithm is applied to lane detection, an end-to-end approach is adopted with the Ultra-Fast-Lane-Detection (UFLD) model as the base network. UFLD was chosen because its lightweight architecture achieves 300 frames per second at the same input resolution while performing comparably to state-of-the-art methods. UFLD formulates lane detection as row-based selection: each lane is represented by a series of horizontal positions at predefined rows, i.e., row anchors. For each row anchor, the position is divided into w grid cells. For the i-th lane and j-th row anchor, position prediction becomes a classification problem in which the model outputs the probability Pi,j over (w + 1) grid cells; the additional class in the output indicates that no lane is present.
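The row-based formulation can be made concrete with a small decoding sketch. This is a hedged NumPy illustration of the (w + 1)-way classification described above, not UFLD's actual post-processing code; `decode_lane_positions` is a hypothetical helper.

```python
import numpy as np

def decode_lane_positions(logits, no_lane_index=-1):
    """Decode UFLD-style row-anchor logits into horizontal grid positions.

    logits: array of shape (num_lanes, num_row_anchors, w + 1), giving for
    each lane and row anchor a score over w grid cells plus one extra
    "no lane" class. Returns an int array (num_lanes, num_row_anchors)
    where `no_lane_index` marks anchors with no lane present.
    """
    pred = np.asarray(logits).argmax(axis=-1)   # most likely cell per anchor
    w_plus_1 = logits.shape[-1]
    pred[pred == w_plus_1 - 1] = no_lane_index  # last class means "no lane"
    return pred
```

With w = 2 grid cells, a row anchor whose highest score lands on the third (extra) class decodes to "no lane", while the others decode to a grid-cell index.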

UFLD adds an auxiliary segmentation branch that aggregates features at multiple scales to model local features; it is used only during training. Following the UFLD method, cross-entropy is used for the segmentation loss Lseg. For lane classification, a small fully connected (FC) branch is added that receives the same features as the FC layer used for lane position prediction. The lane classification loss Lcls is also a cross-entropy loss.
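The three cross-entropy terms combine into one training objective. Below is a minimal NumPy stand-in for the framework's PyTorch losses; `training_loss` and the weighting factors `lambda_seg`/`lambda_cls` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cross_entropy(logits, targets):
    """Mean cross-entropy over the last (class) axis, from raw logits."""
    logits = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    picked = np.take_along_axis(log_probs, targets[..., None], axis=-1)
    return -picked.mean()

def training_loss(pos_logits, pos_tgt, seg_logits, seg_tgt,
                  cls_logits, cls_tgt, lambda_seg=1.0, lambda_cls=1.0):
    """L = L_pos + lambda_seg * L_seg + lambda_cls * L_cls, all cross-entropy."""
    return (cross_entropy(pos_logits, pos_tgt)
            + lambda_seg * cross_entropy(seg_logits, seg_tgt)
            + lambda_cls * cross_entropy(cls_logits, cls_tgt))
```

With uniform (all-zero) logits over 3 classes, each term equals ln 3, which is a quick sanity check on the implementation.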

To mitigate the domain shift inherent in the UDA setting, the adversarial generative methods UNIT ("Unsupervised Image-to-Image Translation Networks", NIPS 2017) and MUNIT ("Multimodal Unsupervised Image-to-Image Translation", ECCV 2018) are adopted, along with an adversarial discriminative method that uses a feature discriminator. As shown in the figure, an adversarial generative method (A) and an adversarial discriminative method (B) are proposed. UNIT and MUNIT are both represented in (A), which shows the generator input for image translation. The additional style input to MUNIT is shown with dashed blue lines. For simplicity, the MUNIT style-encoder output is omitted, as it is not used for image translation.
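The adversarial discriminative idea reduces to two opposing losses: the feature discriminator learns to tell source features from target features, while the feature extractor is trained to fool it. A minimal NumPy sketch of these objectives, assuming a scalar score per feature vector (the actual framework uses PyTorch networks):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(src_scores, tgt_scores, eps=1e-7):
    """D is trained to output 1 for source-domain features, 0 for target."""
    return -(np.log(sigmoid(src_scores) + eps).mean()
             + np.log(1.0 - sigmoid(tgt_scores) + eps).mean())

def adversarial_feature_loss(tgt_scores, eps=1e-7):
    """The feature extractor is trained to make target features score as source."""
    return -np.log(sigmoid(tgt_scores) + eps).mean()
```

When the two losses are minimized in alternation, the extractor is pushed toward domain-invariant features, which is what lets the lane head trained on synthetic labels transfer to real images.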

[Figure: (A) the adversarial generative method and (B) the adversarial discriminative method]

The experimental results are as follows:

[Figures: qualitative lane detection results]

Left: direct transfer; right: the adversarial discriminative adaptation (ADA) method

