ViP3D: End-to-end Visual Trajectory Prediction via 3D Agent Queries
The arXiv paper "ViP3D: End-to-end Visual Trajectory Prediction via 3D Agent Queries", posted on August 2, 2022, is joint work by Tsinghua University, the Shanghai Qi Zhi Institute, CMU, Fudan University, Li Auto, MIT, and others.
Existing autonomous driving pipelines separate the perception module from the prediction module; the two communicate through manually selected features, such as agent boxes and trajectories, as the interface. Because of this separation, the prediction module receives only partial information from perception. Worse, perception errors propagate and accumulate, degrading the prediction results.
This work proposes ViP3D, a visual trajectory prediction pipeline that exploits the rich information in raw video to predict the future trajectories of agents in the scene. ViP3D uses sparse agent queries throughout the pipeline, making it fully differentiable and interpretable. In addition, a new metric for the end-to-end visual trajectory prediction task is proposed, End-to-end Prediction Accuracy (EPA), which jointly accounts for perception and prediction accuracy when scoring predicted trajectories against ground-truth trajectories.
The figure compares the traditional multi-stage cascaded pipeline with ViP3D: the traditional pipeline chains multiple non-differentiable modules (detection, tracking, and prediction), whereas ViP3D takes multi-view video as input and generates predicted trajectories end to end, effectively exploiting visual cues such as vehicle turn signals.
ViP3D aims to solve trajectory prediction from raw video in an end-to-end manner. Specifically, given multi-view videos and a high-definition map, ViP3D predicts the future trajectories of all agents in the scene.
The overall pipeline of ViP3D is shown in the figure. First, a query-based tracker processes multi-view videos from the surrounding cameras to obtain tracked agent queries carrying visual features; these features capture each agent's motion dynamics and appearance, as well as the relationships between agents. Then, a trajectory predictor takes the tracked agent queries as input, associates them with HD-map features, and outputs the predicted trajectories.
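Under hypothetical interfaces (the `tracker` and `predictor` callables below are illustrative stand-ins, not the paper's actual API), the two-stage dataflow described above can be sketched as:

```python
def vip3d_forward(multi_view_video, hd_map, tracker, predictor):
    """Sketch of ViP3D's end-to-end dataflow. The tracker maps multi-view
    frames to one query feature vector per tracked agent; the predictor maps
    each query plus the HD map directly to future trajectories. Note that no
    hand-picked boxes or past trajectories are passed between the two stages,
    only the learned query features."""
    agent_queries = tracker(multi_view_video)  # {agent_id: feature vector}
    return {agent_id: predictor(query, hd_map)
            for agent_id, query in agent_queries.items()}


# Usage with trivial stub stages, just to show the interface:
stub_tracker = lambda video: {"agent_0": [0.1, 0.2]}
stub_predictor = lambda query, hd_map: [(1.0, 0.0), (2.0, 0.0)]
trajectories = vip3d_forward(None, None, stub_tracker, stub_predictor)
```

Because every stage exchanges only feature tensors, gradients can flow from the predicted trajectories back into the tracker, which is what makes the pipeline fully differentiable.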
The query-based tracker extracts visual features from the raw video of the surrounding cameras. Specifically, per-frame image features are extracted following DETR3D. For temporal feature aggregation, a query-based tracker is designed following MOTR ("MOTR: End-to-end Multiple-Object Tracking with Transformer", arXiv:2105.03247, 2021), with two key steps: query feature update and query supervision. Agent queries are updated over time to model each agent's motion dynamics.
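The query feature update step can be illustrated with a minimal sketch. The fixed blending weight below is a stand-in for the learned attention/MLP update a real tracker such as MOTR would use:

```python
def update_query(query, frame_feature, gate_weight=0.5):
    """One step of query feature update: blend the persistent agent query
    with the feature extracted from the current frame. gate_weight is a
    fixed illustrative constant, not a learned parameter."""
    return [gate_weight * q + (1.0 - gate_weight) * f
            for q, f in zip(query, frame_feature)]


def track(initial_query, frame_features):
    """Carry one agent query across a video clip: each frame refines the
    query, so it accumulates the agent's appearance and motion history."""
    query = list(initial_query)
    history = []
    for feat in frame_features:
        query = update_query(query, feat)
        history.append(list(query))
    return query, history
```

The important property is that the same query vector persists across frames, which is what lets it encode motion dynamics rather than single-frame appearance.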
Most existing trajectory prediction methods comprise three parts: agent encoding, map encoding, and trajectory decoding. After query-based tracking, the tracked agent queries are obtained, which can be regarded as the agent features produced by agent encoding; the remaining tasks are therefore map encoding and trajectory decoding.
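A minimal sketch of the two remaining steps, assuming dot-product attention without learned projections and a toy decoding head (a real predictor uses learned networks for both):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend_to_map(agent_query, map_features):
    """Map encoding: the agent query attends over HD-map lane features
    (plain dot-product attention, a simplification of a learned module)."""
    scores = softmax([sum(q * m for q, m in zip(agent_query, lane))
                      for lane in map_features])
    context = [sum(w * lane[d] for w, lane in zip(scores, map_features))
               for d in range(len(agent_query))]
    return [q + c for q, c in zip(agent_query, context)]  # residual fusion

def decode_trajectories(fused_query, k=3, horizon=4):
    """Trajectory decoding: emit K candidate future trajectories. Each mode
    here is a straight-line rollout whose heading is a toy function of the
    query; a real decoder is a learned regression head."""
    base = sum(fused_query)
    trajectories = []
    for mode in range(k):
        angle = base + mode * 2.0 * math.pi / k
        trajectories.append([(t * math.cos(angle), t * math.sin(angle))
                             for t in range(1, horizon + 1)])
    return trajectories
```

The key point is that the decoder consumes the map-fused query directly, so trajectory errors can backpropagate through both the map attention and the tracker.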
The predicted and ground-truth agents are represented as unordered sets Ŝ and S, respectively, where each agent is described by its coordinates at the current time step and K possible future trajectories. For each agent type c, prediction accuracy is computed between Ŝc and Sc: a matching cost is defined between each predicted agent and each ground-truth agent, and the EPA between Ŝc and Sc is then computed from the resulting matching (the exact formulas are given in the paper).
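The structure of such a metric can be sketched as follows. The cost function (final displacement error over the best of K modes), the threshold `tau`, the false-positive weight `alpha`, and the greedy matching are all illustrative stand-ins, not the paper's exact definition:

```python
import math

def fde(pred_traj, gt_traj):
    """Final displacement error between two trajectories (lists of (x, y))."""
    (px, py), (gx, gy) = pred_traj[-1], gt_traj[-1]
    return math.hypot(px - gx, py - gy)

def epa_sketch(pred_agents, gt_agents, tau=2.0, alpha=0.5):
    """EPA-style score. Each predicted agent carries K candidate
    trajectories; its cost to a ground-truth agent is the best (minimum)
    FDE, and a match is only valid below the threshold tau. Unmatched
    predictions are penalized as false positives."""
    n_gt = len(gt_agents)
    if n_gt == 0:
        return 0.0
    # Collect all valid (cost, pred_idx, gt_idx) candidate matches.
    pairs = []
    for i, modes in enumerate(pred_agents):
        for j, gt in enumerate(gt_agents):
            cost = min(fde(traj, gt) for traj in modes)
            if cost <= tau:
                pairs.append((cost, i, j))
    # Greedy one-to-one matching by ascending cost (a simple stand-in
    # for optimal bipartite matching).
    pairs.sort()
    used_pred, used_gt, n_hit = set(), set(), 0
    for cost, i, j in pairs:
        if i not in used_pred and j not in used_gt:
            used_pred.add(i); used_gt.add(j); n_hit += 1
    n_fp = len(pred_agents) - n_hit
    return max(0.0, (n_hit - alpha * n_fp) / n_gt)
```

This captures the idea stated above: the metric rewards predicted agents whose trajectories match a ground-truth agent, and simultaneously penalizes perception failures (spurious agents), so perception and prediction quality are scored jointly.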
The experimental results are reported in the paper's tables and figures.