
The key step for 'getting on the car' for large models: the world's first language + autonomous driving open source data set is here

PHPz | 2023-09-16 20:13:02

DriveLM is a language-based driving project consisting of a dataset and a model. With DriveLM, we introduce the reasoning capabilities of large language models into autonomous driving (AD) to make decisions and ensure explainable planning.

In DriveLM's dataset, we use human-written reasoning logic as the connections that link perception, prediction, and planning (P3). In our model, we propose an AD visual language model with graph-of-thought capability to produce better planning results. We have currently released a demo version of the dataset; the complete dataset and model will be released in the future.

Project link: https://github.com/OpenDriveLab/DriveLM


What is Graph-of-Thoughts in AD?

The most exciting aspect of the dataset is that the question answering (QA) in P3 is connected in a graph-style structure, with each QA pair as a node and the relationships between objects as edges.
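The graph structure described above can be sketched as follows. This is a minimal illustration of the idea, not the actual DriveLM schema; the class names, field names, and example QA text are all assumptions for the sake of the sketch.

```python
# Hypothetical sketch of a graph-style QA structure: QA pairs are nodes,
# and edges link QAs that reason about the same object across P3 stages.
# (All names and fields here are illustrative, not the real dataset format.)
from dataclasses import dataclass, field


@dataclass
class QANode:
    """One QA pair, tied to a P3 stage (perception, prediction, or planning)."""
    qa_id: str
    stage: str          # "perception" | "prediction" | "planning"
    question: str
    answer: str
    edges: list = field(default_factory=list)  # qa_ids of downstream nodes


# Perception -> prediction -> planning, chained through one key object.
nodes = {
    "q1": QANode("q1", "perception",
                 "What is the pedestrian ahead doing?", "Crossing the street."),
    "q2": QANode("q2", "prediction",
                 "Where will the pedestrian be next?", "In the ego lane."),
    "q3": QANode("q3", "planning",
                 "What should the ego vehicle do?", "Slow down and yield."),
}
nodes["q1"].edges.append("q2")  # perception result feeds the prediction QA
nodes["q2"].edges.append("q3")  # prediction result feeds the planning QA
```

Following the edges from a perception node thus traces the same reasoning chain a human driver would use: see the object, anticipate it, then act.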

Compared with pure-language trees-of-thought or graphs-of-thought, we prefer a multi-modal structure. We adopt it in the AD domain because each stage of the graph corresponds to a stage of the AD task, from raw sensor input to final control action.


What is included in the DriveLM dataset?

Our dataset is built on the mainstream nuScenes dataset. The core element of DriveLM is frame-based P3 QA. Perception questions require the model to recognize objects in the scene. Prediction questions require the model to forecast the future state of important objects in the scene. Planning questions prompt the model to give reasonable planned actions and avoid dangerous ones.
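A frame-based P3 QA record might look like the sketch below. The keys, tokens, and QA text are hypothetical; the real schema is defined in the DriveLM repository.

```python
# Illustrative shape of one frame-based P3 QA record built on nuScenes.
# (All keys and values are assumptions for illustration, not the real schema.)
frame_record = {
    "scene_token": "nuscenes-scene-0001",   # hypothetical scene identifier
    "frame_token": "frame-0042",            # hypothetical keyframe identifier
    "perception": [
        {"q": "What objects are in front of the ego vehicle?",
         "a": "A white sedan and a traffic light."},
    ],
    "prediction": [
        {"q": "What will the white sedan do next?",
         "a": "It will brake for the red light."},
    ],
    "planning": [
        {"q": "What is a safe action for the ego vehicle?",
         "a": "Decelerate and keep a safe following distance."},
    ],
}

# Every keyframe carries QAs for all three P3 stages.
assert set(frame_record) >= {"perception", "prediction", "planning"}
```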

What does the annotation process look like?

  1. Keyframe selection. Given all the frames in a clip, the annotator selects the keyframes that need to be annotated. The criterion is that these frames should involve a change in the ego vehicle's motion state (lane changes, sudden stops, starting after a stop, etc.).
  2. Key object selection. Given a keyframe, the annotator picks out key objects in the six surrounding-camera images. The criterion is that these objects should be able to affect the ego vehicle (traffic lights, pedestrians crossing the street, other vehicles).
  3. QA annotation. Given these key objects, we automatically generate single- or multi-object questions about perception, prediction, and planning. More details can be found in our demo data.
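The three steps above can be sketched as a small pipeline. The selection heuristics, field names, and question templates below are illustrative assumptions, not the project's actual tooling.

```python
# Minimal sketch of the three-step annotation pipeline described above.
# (Heuristics, data shapes, and question templates are assumptions.)

def is_keyframe(frame):
    """Step 1: keep frames where the ego vehicle's motion state changes."""
    return frame.get("ego_event") in {"lane_change", "sudden_stop", "start_from_stop"}


def select_key_objects(frame):
    """Step 2: pick objects in the surrounding images that can affect the ego vehicle."""
    relevant = {"traffic_light", "pedestrian", "vehicle"}
    return [o for o in frame["objects"] if o["type"] in relevant]


def generate_qas(objects):
    """Step 3: auto-generate perception/prediction/planning questions per key object."""
    qas = []
    for obj in objects:
        qas.append({"stage": "perception",
                    "q": f"What is the {obj['type']} at {obj['pos']} doing?"})
        qas.append({"stage": "prediction",
                    "q": f"What will the {obj['type']} do next?"})
        qas.append({"stage": "planning",
                    "q": f"How should the ego vehicle react to the {obj['type']}?"})
    return qas


clip = [
    {"ego_event": "cruise", "objects": []},
    {"ego_event": "sudden_stop",
     "objects": [{"type": "pedestrian", "pos": "front-left"},
                 {"type": "tree", "pos": "roadside"}]},
]
keyframes = [f for f in clip if is_keyframe(f)]          # only the sudden stop survives
qas = generate_qas(select_key_objects(keyframes[0]))      # the tree is filtered out
```

Running this on the two-frame toy clip selects one keyframe and yields three QAs, one per P3 stage, for the single key object (the pedestrian).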


Statement:
This article is reproduced from 51cto.com. In case of infringement, please contact admin@php.cn for removal.