
Trajectory Prediction Series | What does the evolved version of HiVT QCNet talk about?

WBOY
2024-04-12

QCNet is the evolved version of HiVT (you can read this article directly without reading the HiVT one first), with greatly improved performance and efficiency.

The article is also easy to read.

[Trajectory Prediction Series][Notes] HiVT: Hierarchical Vector Transformer for Multi-Agent Motion Prediction - Zhihu (zhihu.com)

Original link:

https://openaccess.thecvf.com/content/CVPR2023/papers/Zhou_Query-Centric_Trajectory_Prediction_CVPR_2023_paper.pdf

Abstract

Models that predict with each agent as the center have a problem: whenever the observation window slides forward, the scene has to be re-normalized to each agent's current position and then re-encoded, which is too expensive for onboard use. We therefore adopt a query-centric framework for scene encoding, which can reuse previously computed results and does not depend on a global time coordinate system. And because all agents share the same scene features, trajectory decoding can be parallelized across agents.

Even with a rich scene encoding, current decoding methods still struggle to capture mode information, especially for long-term prediction. To address this, we first use anchor-free queries to generate trajectory proposals in a step-by-step feature-extraction fashion, so the model can better exploit scene features at different horizons. A refinement module then uses the proposals from the previous step as dynamic anchors to optimize the trajectories (anchor-based). With such high-quality anchors, our query-based decoder handles multi-modality much better.

The method tops the leaderboards. The design also yields a pipeline with shared scene-feature encoding and parallel multi-agent decoding.

Introduction

Current trajectory prediction papers have the following problems:

  1. Processing of heterogeneous scene information is inefficient. In autonomous driving, data streams into the model frame by frame, including the vectorized HD map and the historical trajectories of surrounding agents. Recent factorized attention methods (separate attention over space and time) have raised the handling of this information to a new level, but they require attention over every scene element, so the cost is still very high in complex scenes.
  2. Prediction uncertainty explodes as the horizon grows. A car at an intersection, for example, may go straight or turn. To avoid missing potential outcomes, the model needs to capture the distribution over multiple modes rather than just predict the most frequent one. But there is only one ground truth, so learning multiple possibilities well is hard. Some papers supervise with multiple hand-crafted anchors; this works only as well as the anchors themselves, and fails badly when no anchor accurately covers the ground truth. Other approaches directly predict multiple modes, ignoring the problems of mode collapse and training instability.

To solve the above problems, we propose QCNet.

First, we want to improve onboard inference speed while still exploiting powerful factorized attention. The agent-centric encoding of past work clearly does not fit: when the next frame arrives, the window slides, yet it still overlaps heavily with the previous frame, so there is an opportunity to reuse features. However, the agent-centric method transforms everything into the agent's current coordinate system, forcing the scene to be re-encoded. To solve this, we use a query-centric method: each scene element extracts features in its own spatio-temporal coordinate system, independent of any global frame (it does not matter where the ego is). (This works with HD maps because map elements have persistent IDs; without an HD map it may be harder, since map elements would have to be tracked across frames.)


This allows us to reuse previously computed encodings: for each agent, these cached features can be used directly, saving latency.

Second, to make better use of these scene encodings for multi-modal long-term prediction, we use anchor-free queries to extract scene features step by step, each step starting from the previously predicted position, so that each decode covers only a short horizon. This lets the feature extraction focus on where the agent will be at a given future moment, rather than pulling in distant features to cover many future moments at once. The high-quality anchors obtained this way are then fine-tuned in the subsequent refine module. Combining anchor-free and anchor-based decoding exploits the advantages of both, enabling multi-modal, long-term prediction.

This is the first approach to exploit the temporal continuity of trajectory prediction for high-speed inference, while the decoder also handles multi-modal, long-term prediction.

Approach

Input and Output


The prediction module also receives M polygons from the HD map; each polygon has multiple points plus semantic information (crosswalk, lane, etc.).

Using the above agent states over T past steps and the map information, the prediction module outputs K predicted trajectories of total length T', along with their probability distribution.

Query-Centric Scene Context Encoding

The first step is naturally scene encoding. The currently popular factorized attention (separate attention over the time and space dimensions) works in three steps:

  1. Attention along the time dimension, complexity O(AT²): each agent attends over its own history via matrix multiplication.
  2. Cross attention between agents and the map, complexity O(ATM): at each time step, agents attend to map elements.
  3. Attention between agents, complexity O(TA²): at each time step, agents attend to each other.

Compared with methods that compress the time dimension into the current moment and only then let agents interact with each other and with the map, this approach performs the interactions at every past time step, so it captures more information, such as how the agent-map interaction evolves over time.

The drawback is that this cubic complexity blows up as the scene grows more complex and the number of elements increases. Our goal is to exploit this factorized attention without letting the complexity explode so easily.
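A back-of-the-envelope check of the costs above (the scene sizes here are made up purely for illustration):

```python
# Hypothetical scene sizes, assumed for illustration only.
A, T, M = 32, 50, 64  # agents, history steps, map polygons

# Factorized attention computes three groups of attention scores:
temporal    = A * T * T   # each agent attends over its own T steps: O(A T^2)
agent_map   = A * T * M   # per step, agents attend to map elements: O(A T M)
agent_agent = T * A * A   # per step, agents attend to each other:   O(T A^2)

total = temporal + agent_map + agent_agent

# Compressing history into the current step would drop the factor T from
# the agent-map and agent-agent terms, but lose per-step interactions.
compressed = A * T * T + A * M + A * A
```

Even at these modest sizes the factorized variant does roughly three times the work of the compressed one, and the gap widens with every extra agent or map element.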

An obvious idea is to reuse the previous frame's results, since T−1 of the time steps are exact repeats. But because agent-centric encoding rotates and translates all features into the agent's current pose, the previous frame's results cannot be reused directly.

To solve this coordinate-system problem, a query-centric approach is adopted that learns features of scene elements without relying on their global coordinates. Each scene element gets its own local spatio-temporal coordinate system, and features are extracted within it; even if the ego moves elsewhere, these locally extracted features stay unchanged. Each local frame naturally has an origin and an orientation: this pose information serves as the key, and the extracted features serve as the value, for the subsequent attention operations. The approach proceeds in the following steps:

Local Spacetime Coordinate System

For the feature of agent i at time t, the agent's position and orientation at that time are chosen as the reference frame. For map elements, the element's starting point serves as the reference frame. This choice keeps the extracted features unchanged after the ego moves.

Scene Element Embedding

All other vector features within each element are expressed in polar coordinates in the above reference frame, then converted into Fourier features to capture high-frequency signals, concatenated with the semantic features, and passed through an MLP. For map elements, attention followed by pooling is applied first so that the features are invariant to the ordering of a polygon's interior points. The resulting agent features are [A, T, D] and the map features are [M, D], where D is the shared feature dimension required for the attention matrix multiplications. Features extracted this way remain usable wherever the ego is.

Fourier embedding: sample a set of frequency weights from a normal distribution, multiply them by the input and by 2π, and take cos and sin as features. Intuitively, the input is treated as a signal decomposed into basic signals of multiple frequencies, which captures high-frequency content better. High-frequency detail matters for the precision of the results, and ordinary embeddings easily lose it. Note that this is not recommended for noisy data, since it will latch onto spurious high-frequency signals. (It trades generality for precision, a bit like overfitting.)
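A minimal sketch of such a Fourier embedding for a scalar input. In the actual model the frequencies are learnable parameters and the result is fed through an MLP; here they are fixed random draws, and the function name is invented:

```python
import math
import random

def fourier_embed(x, num_freqs=8, seed=0):
    """Map a scalar input to 2*num_freqs Fourier features.

    Frequencies are sampled once from a normal distribution (a learnable
    weight in the real model; fixed by `seed` here for illustration).
    """
    rng = random.Random(seed)
    freqs = [rng.gauss(0.0, 1.0) for _ in range(num_freqs)]
    feats = []
    for w in freqs:
        # Treat x as a signal projected onto each frequency component.
        feats.append(math.cos(2 * math.pi * w * x))
        feats.append(math.sin(2 * math.pi * w * x))
    return feats

emb = fourier_embed(0.5)  # 16 features for one scalar input
```

Because each feature is a sinusoid of the input, nearby inputs get similar embeddings at low frequencies while still being distinguishable at high frequencies.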

Relative Spatial-Temporal Positional Embedding


Self-Attention for Map Encoding


Factorized Attention for Agent Encoding


Nearby is defined as within 50 m of the agent. This attention is applied for multiple rounds.

Note that features obtained this way are spatio-temporally invariant: no matter where the ego goes at any time, they do not change, because nothing is translated or rotated into the current pose. Since each frame contributes only one new time step relative to the previous frame, features for past moments need not be recomputed, dividing the total computational cost by T.
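The reuse idea can be sketched as a cache keyed on element ID and timestamp, which works precisely because query-centric features do not depend on the ego pose. Class and method names here are invented for illustration:

```python
class SceneEncoderCache:
    """Cache per-element encodings across frames.

    Valid only because query-centric features are pose-invariant: the same
    (element, timestamp) pair always encodes to the same feature, so a new
    frame only pays for its single new time step.
    """

    def __init__(self, encode_fn):
        self.encode_fn = encode_fn  # the expensive per-element encoder
        self.cache = {}
        self.calls = 0              # counts actual encoder invocations

    def encode(self, element_id, timestamp, raw):
        key = (element_id, timestamp)
        if key not in self.cache:
            self.calls += 1
            self.cache[key] = self.encode_fn(raw)
        return self.cache[key]

cache = SceneEncoderCache(lambda raw: raw * 2)
cache.encode("lane_7", 0, 3)   # frame 1 encodes timestep 0
cache.encode("lane_7", 0, 3)   # frame 2 reuses it; no new encoder call
```

An agent-centric encoder could not use such a cache, because the encoding of old timesteps changes every time the agent moves.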


Query-Based Trajectory Decoding

Doing attention over keys and values with DETR-style anchor-free queries leads to unstable training and mode collapse, and long-term prediction is unreliable because uncertainty explodes at later time steps. This model therefore first runs a coarse anchor-free query step, and then refines its output with an anchor-based step.

(Figure: the entire network structure)

Mode2Scene and Mode2Mode Attention

Both Mode2Scene steps adopt the DETR structure: the queries are the K trajectory modes (randomly initialized in the coarse proposal step; in the refine step, derived from the features of the proposal step's output), which perform multiple cross attentions over the scene features (agent history, map, surrounding agents).
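The Mode2Scene step can be sketched as plain single-head cross attention, with the K mode embeddings as queries and encoded scene elements as keys/values. This is purely illustrative: the real model uses multi-head attention with learned projections:

```python
import math

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross attention on lists of vectors.

    In Mode2Scene, `queries` would be the K mode embeddings and
    `keys`/`values` the encoded scene elements (history, map, neighbors).
    """
    d = len(queries[0])
    out = []
    for q in queries:
        # Similarity of this mode query to every scene element.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Numerically stable softmax over scene elements.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        attn = [e / z for e in exps]
        # Weighted sum of scene values for this mode.
        out.append([sum(a * v[j] for a, v in zip(attn, values))
                    for j in range(len(values[0]))])
    return out
```

Each mode query thus gathers its own weighted mixture of scene information, which is what lets different modes attend to different parts of the map.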

(Figure: the DETR structure)

Mode2Mode performs self attention among the K modes, encouraging diversity so that the modes do not all collapse together.

Reference Frames of Mode Queries

To predict the trajectories of multiple agents in parallel, the scene encoding is shared among agents. Because every scene feature is relative to its own element, using them still requires switching to the agent's viewpoint: each mode query is augmented with the agent's position and orientation, and, as in the earlier relative-position encoding, the relative pose between each scene element and the agent is embedded into the keys and values. (Intuitively, each mode of each agent learns its own attention weighting over nearby information.)

Anchor-Free Trajectory Proposal

The first pass is anchor-free: learnable queries create K relatively low-quality trajectory proposals. Since cross attention extracts features from the scene information, a small number of effective anchors can be generated efficiently for use in the second, refinement pass, while self attention keeps the proposals diverse.


Anchor-Based Trajectory Refinement

Although the anchor-free approach is simple, it suffers from unstable training and possible mode collapse. Moreover, the randomly initialized modes must work well for every agent in the scene, which is hard and easily produces trajectory proposals that violate kinematics or traffic rules. Hence the anchor-based correction: an offset is predicted from each proposal (added to it to obtain the refined trajectory), together with the probability of each new trajectory.

This module also takes the DETR form, with each mode's query derived from the previous step's proposal: a small GRU embeds each anchor step by step, and the feature at the final step serves as the query. These anchor-based queries carry spatial information, making it easier to capture useful information during attention.

Training Objectives

Same as HiVT (see the HiVT notes): a Laplace distribution is used. Bluntly, each time step of each mode is modeled as a Laplace distribution (analogous to the usual Gaussian, with the mean and scale representing the point's position and its uncertainty), the time steps are treated as independent (their likelihoods multiply directly), and π denotes the probability of the corresponding mode.
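A toy 1-D version of that mixture likelihood (illustrative; the real model works on 2-D positions per time step, and the function names are invented):

```python
import math

def laplace_nll(mu, b, x):
    # Negative log-likelihood of Laplace(mu, b) at x.
    return math.log(2 * b) + abs(x - mu) / b

def mixture_nll(pis, trajs, scales, gt):
    """NLL of a K-mode Laplace mixture over a 1-D trajectory.

    pis: K mode probabilities; trajs/scales: per-mode position and scale
    per time step; gt: ground-truth positions. Time steps are treated as
    independent, so per-step log-likelihoods sum within each mode.
    """
    mode_logliks = []
    for pi, mus, bs in zip(pis, trajs, scales):
        ll = sum(-laplace_nll(m, b, x) for m, b, x in zip(mus, bs, gt))
        mode_logliks.append(math.log(pi) + ll)
    # log-sum-exp over modes for numerical stability.
    m = max(mode_logliks)
    return -(m + math.log(sum(math.exp(v - m) for v in mode_logliks)))
```

With a single mode placed exactly on the ground truth with unit scale, the NLL per step reduces to log 2, the normalizer of the Laplace density.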


The loss is composed of three parts.


It mainly divides into two kinds: a classification loss and regression losses.

The classification loss is on the predicted mode probabilities. Note that the gradient must be stopped here: the gradient from the probabilities must not flow into the coordinate predictions (i.e., probabilities are learned on the premise that each mode's predicted positions are reasonable). The mode closest to ground truth gets label 1, the others 0.

There are two regression losses, one for the first-stage proposals and one for the second-stage refinement. A winner-take-all scheme is adopted: only the mode closest to ground truth incurs loss, and both stages compute it. For training stability, the gradient between the two stages is also stopped, so the proposal branch learns only proposals and the refine branch learns only refinement.
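A simplified 1-D sketch of the winner-take-all regression loss. The winner-selection rule used here (endpoint distance of the refined mode) and the plain L1 error are illustrative assumptions; stop-gradient between stages would be a `detach()` in a real autograd framework:

```python
def wta_regression_loss(proposals, refinements, gt):
    """Winner-take-all: only the mode closest to ground truth contributes,
    and both stages are penalized for that mode.

    In the real model the two stages are detached from each other so the
    proposal branch learns only proposals and refine only refinement.
    """
    def fde(traj):  # final displacement error (1-D sketch)
        return abs(traj[-1] - gt[-1])

    def l1(traj):   # average displacement error
        return sum(abs(p - g) for p, g in zip(traj, gt)) / len(gt)

    # Winner chosen by the refined endpoint (an assumption here).
    best = min(range(len(proposals)), key=lambda k: fde(refinements[k]))
    return l1(proposals[best]) + l1(refinements[best]), best
```

The losing modes receive no regression gradient at all, which is what lets them stay free to cover other futures instead of being dragged toward this sample's ground truth.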

Experiments

On Argoverse 2 it is basically SOTA (* indicates the use of ensemble techniques).

b-minFDE differs from minFDE by an extra coefficient tied to the mode's probability: the objective wants the trajectory with the smallest FDE to also have the highest probability.
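Assuming the Argoverse benchmark's definition of brier-minFDE, which adds (1 − p)² to the FDE of the best mode, the metric can be sketched as:

```python
import math

def b_min_fde(trajectories, probs, gt_endpoint):
    """brier-minFDE, assuming the Argoverse definition:
    FDE of the best mode plus (1 - p)^2, where p is that mode's
    predicted probability."""
    def fde(traj):
        return math.dist(traj[-1], gt_endpoint)

    k = min(range(len(trajectories)), key=lambda i: fde(trajectories[i]))
    return fde(trajectories[k]) + (1.0 - probs[k]) ** 2
```

So even a mode that hits the ground-truth endpoint exactly is penalized up to 1.0 if the model assigned it low probability, which is what pushes methods to calibrate their mode probabilities.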

Regarding the ensemble technique, it feels a bit like cheating; see the introduction in BANet, briefly summarized below.

In the final trajectory-generation step, several submodels (decoders) with identical structure run side by side, giving multiple sets of predictions: for example, 7 submodels with 6 predictions each give 42 trajectories. K-means clustering (keyed on the last coordinate point) then groups them into 6 clusters of 7, and a weighted average within each cluster yields a new trajectory.

The weighting works as follows: it uses the b-minFDE between the current trajectory and ground truth, with c being the current trajectory's probability. Weights are computed within each cluster, and the weighted sum of the trajectory coordinates gives the new trajectory. (This feels a bit hacky, since c is each trajectory's probability within its own submodel's output, which does not quite match its meaning when used in clustering.)
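The in-cluster averaging step can be sketched as follows; the weights are assumed to be given (in BANet they come from b-minFDE-based scores), and the function name is invented:

```python
def weighted_average_group(group_trajs, group_weights):
    """Weighted average of one cluster of equally long 2-D trajectories.

    group_weights are assumed precomputed (b-minFDE-based in BANet);
    they are normalized here so the result stays a convex combination.
    """
    z = sum(group_weights)
    ws = [w / z for w in group_weights]
    steps = len(group_trajs[0])
    out = []
    for t in range(steps):
        x = sum(w * traj[t][0] for w, traj in zip(ws, group_trajs))
        y = sum(w * traj[t][1] for w, traj in zip(ws, group_trajs))
        out.append((x, y))
    return out
```

Because the weights are normalized per cluster, each output point lies inside the convex hull of that cluster's points at the same time step.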

Calculated this way, the probability of the new trajectory is also hard to get right: the same weighting cannot be applied, or the probabilities would not necessarily sum to 1. It seems the in-cluster probabilities can only be combined with equal weights.

On Argoverse 1 it is also far ahead.

Ablation on scene encoding: reusing previous encoding results greatly reduces inference time. Increasing the number of factorized-attention interactions between the agent and the scene information improves prediction, but latency also rises sharply, so there is a trade-off.

Ablation on the various components: both the refinement step and factorized attention across the different interactions prove important; neither is dispensable.



Statement: this article is reproduced from 51cto.com.