
Let’s talk about several large models and autonomous driving concepts that have become popular recently.

WBOY · 2023-11-09 11:13:16

Recently, applications of large models have remained popular. Around the beginning of October, a series of rather gimmicky articles appeared that tried to apply large models to autonomous driving. I have been discussing related topics with many friends, and while writing this article I realized two things: first, that many of us, myself included, have been conflating several closely related but actually distinct concepts; second, that extending these concepts leads to some interesting thoughts worth sharing and discussing with everyone.

Large (Language) Model

This is undoubtedly the most popular direction at present, and the one where papers are most concentrated. How can large language models help autonomous driving? On the one hand, models like GPT-4V provide extremely powerful semantic understanding through alignment with images, which I will set aside for now; on the other hand, an LLM can be used as an agent that directly produces driving behavior. The latter is currently the sexiest research direction, and is inextricably linked to the line of work on embodied AI.

Most of the latter type of work seen so far uses an LLM that is: 1) used directly; 2) fine-tuned with supervised learning; or 3) fine-tuned with reinforcement learning for the driving task. In essence, none of this escapes the existing paradigm of learning-based driving. A very direct question, then: why might an LLM be better at this? Intuitively, driving through words is an inefficient and verbose affair. Then one day it dawned on me: the LLM actually provides a pretraining stage for the agent, through language! One important reason RL previously struggled to generalize was that it was hard to unify diverse tasks and exploit diverse common data for pretraining, so each task had to be trained from scratch; the LLM solves exactly this problem. But several problems remain unsolved: 1) After pretraining, must language be retained as the output interface? This is inconvenient for many tasks and introduces some redundant computation. 2) LLM-as-agent still does not overcome the essential limitations of existing model-free RL methods; all the problems of model-free methods remain. Recently we have also seen some attempts at model-based LLM-as-agent, which may be an interesting direction. A minimal sketch of the LLM-as-agent loop follows.
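To make the "LLM used directly as a driving agent" pattern concrete, here is a minimal sketch of such a loop. Everything in it is hypothetical: `llm_complete`, the prompt format, and the observation fields are placeholders for illustration, not any paper's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Action:
    acceleration: float  # m/s^2
    steering: float      # rad

def serialize_observation(obs: dict) -> str:
    # Flatten perception output into text -- the verbose interface
    # the paragraph above complains about.
    return (f"Ego speed: {obs['speed']:.1f} m/s. "
            f"Lead vehicle {obs['lead_distance']:.1f} m ahead "
            f"moving at {obs['lead_speed']:.1f} m/s.")

def parse_action(reply: str) -> Action:
    # Assumes the prompt has constrained the LLM to answer in the
    # machine-parseable form "accel=<float> steer=<float>".
    fields = dict(kv.split("=") for kv in reply.split())
    return Action(float(fields["accel"]), float(fields["steer"]))

def drive_step(llm_complete, obs: dict) -> Action:
    # llm_complete: any text-in/text-out LLM call (placeholder).
    prompt = ("You are a driving agent. Decide the next control.\n"
              + serialize_observation(obs)
              + "\nAnswer exactly as: accel=<float> steer=<float>")
    return parse_action(llm_complete(prompt))
```

The round trip through text is exactly the inefficiency noted above: the observation is serialized into words and the action parsed back out, with no guarantee that the reply is even well-formed.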

One last complaint, aimed at many of these papers: merely bolting on an LLM and having it output a reason does not make your model interpretable. That reason may still be nonsense... Behavior that carried no guarantee before does not gain one just because a sentence is emitted alongside it.

Large (Vision) Model

Purely visual large models, in fact, have yet to show that magical moment of "emergence". When people talk about large vision models, they usually mean one of two things: either a super feature extractor for visual information pretrained on massive web data, such as CLIP, DINO, or SAM, which greatly improves a model's semantic understanding; or a world model, represented by GAIA, that jointly models (image, action, ...) sequences.

I think the former is simply the result of continuing to scale up linearly along the traditional line of thinking; for now it is hard to see it bringing a qualitative change to autonomous driving. The latter, by contrast, has steadily entered researchers' field of view thanks to continuous publicity from Wayve and Tesla this year. When people talk about world models, they often bundle in the assumptions that the model is end-to-end (directly outputs actions) and that it is tied to LLMs; this assumption is one-sided. My own understanding of world models is still limited, so rather than expand on the topic I will recommend LeCun's interview and @Yu Yang's survey of model-based RL (a toy rollout sketch follows the links below):

Yu Yang: On learning environment models (world models)
https://www.php.cn/link/a2cdd86a458242d42a17c2bf4feff069
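For readers who want a concrete handle on the term, the core object in the model-based sense is simply a learned transition function: predict the next (latent) state from the current one and an action, so that candidate plans can be evaluated in imagination. A minimal sketch, with all names and sizes invented for illustration rather than taken from GAIA or any published model:

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Toy latent dynamics model: z_{t+1} = dynamics(z_t, a_t)."""
    def __init__(self, latent_dim: int = 128, action_dim: int = 2):
        super().__init__()
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def rollout(self, z0: torch.Tensor, actions: torch.Tensor) -> list:
        # Imagine a trajectory from encoded observation z0 under a
        # (T, action_dim) action plan, without touching the real world.
        z, trajectory = z0, []
        for a in actions:
            z = self.dynamics(torch.cat([z, a], dim=-1))
            trajectory.append(z)
        return trajectory
```

What makes this "model-based" is precisely that action sequences can be scored inside the learned model before any of them is executed, which is the capability model-free LLM-as-agent approaches lack.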

Purely visual autonomous driving

This one is easy to understand: it refers to an autonomous driving system that relies on visual sensors only. It is also the ultimate wish for autonomous driving: to drive with a pair of eyes, the way a human does. The concept is generally associated with the two kinds of large models above, because the complex semantics in images require strong abstraction capabilities to extract useful information. Under Tesla's recent publicity offensive, it also overlaps with the end-to-end idea discussed below. But there are many ways to achieve purely visual driving; end-to-end is naturally one of them, not the only one. The hardest problem in realizing purely visual autonomous driving is that vision is inherently insensitive to 3D information, and large models have not fundamentally changed this. Concretely: 1) vision passively receives electromagnetic waves, so unlike other sensors it cannot directly measure geometric quantities in 3D space; 2) perspective makes the estimates for distant objects extremely sensitive to error (a worked example follows). This is very unfriendly to downstream planning and control, which by default operate in a 3D space where error is treated as uniform. But is driving by vision really the same as being able to accurately estimate 3D distance and speed? I think this is a representation question worth studying in depth for purely visual autonomous driving, beyond semantic understanding.
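To make point 2 concrete: under the textbook stereo pinhole model, depth Z relates to disparity d by Z = f·b/d, so a fixed disparity error σ_d yields a depth error σ_Z ≈ Z²·σ_d/(f·b) that grows quadratically with distance. The focal length and baseline below are made-up but plausible values for a car-mounted rig:

```python
f_px = 1000.0   # focal length, pixels (assumed value)
b_m = 0.3       # stereo baseline, meters (assumed value)
sigma_d = 0.5   # disparity measurement noise, pixels

for Z in (10.0, 30.0, 100.0):   # object distance, meters
    sigma_z = Z**2 * sigma_d / (f_px * b_m)
    print(f"Z = {Z:5.0f} m  ->  depth error ~ {sigma_z:6.2f} m")
# Z =    10 m  ->  depth error ~   0.17 m
# Z =    30 m  ->  depth error ~   1.50 m
# Z =   100 m  ->  depth error ~  16.67 m
```

The same half-pixel measurement error that is negligible at 10 m becomes roughly 17 m at 100 m, which is exactly why downstream modules that assume uniform error in 3D space suffer.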

End-to-end autonomous driving

This concept refers to using a jointly optimized model all the way from the sensors to the final output control signals (I think it can also broadly include waypoints, i.e., the planning information one layer upstream). This can be direct end-to-end, as in ALVINN back in the 1980s, which feeds sensor input to a neural network that directly outputs control signals; or staged end-to-end, as in this year's CVPR best paper, UniAD. The common point is that the downstream supervision signal can propagate directly to the upstream modules, instead of each module optimizing its own self-defined objective. Overall this is the right idea: joint optimization is, after all, how deep learning made its fortune. However, for systems like autonomous driving or general-purpose robots, which are extremely complex and interact with the physical world, many problems remain to be overcome in engineering implementation and in organizing and using data efficiently.
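The "downstream supervision reaches upstream modules" point can be shown mechanically. In this toy PyTorch sketch (module names and sizes are purely illustrative), a loss computed only on the planner's output still produces gradients in the perception module, because the two are differentiably connected:

```python
import torch
import torch.nn as nn

perception = nn.Linear(64, 16)   # stand-in for a perception backbone
planner = nn.Linear(16, 2)       # stand-in for a planning head

sensor = torch.randn(8, 64)        # a batch of "sensor" inputs
expert_action = torch.randn(8, 2)  # imitation targets

action = planner(perception(sensor))   # jointly optimized forward pass
loss = nn.functional.mse_loss(action, expert_action)
loss.backward()

# The gradient reaches perception even though the loss only sees actions:
print(perception.weight.grad is not None)   # True
```

In a modular pipeline, by contrast, `perception` would be trained against its own hand-designed labels, and the planning loss would stop at the module boundary.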

Feed-Forward End-to-End Autonomous Driving

This concept seems to be mentioned rarely, but I find the distinction valuable: end-to-end itself is not the problem; the problem lies in consuming observations in a purely feed-forward way. Myself included, people have tended to assume by default that end-to-end driving must take a feed-forward form, because 99% of current deep-learning-based methods assume such a structure: the final output of interest (such as the control signal) is u = f(x), where x is the sensor observations and f can be a very complex function. But in some problems we want the final output to satisfy, or come close to satisfying, certain properties, and a feed-forward form can hardly give such a guarantee. So there is another way to write it: u* = argmin_u g(u, x) s.t. h(u, x) ≤ 0, i.e., obtain the output by solving a constrained optimization problem online; a toy contrast of the two forms follows.
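Here is a minimal sketch of that contrast: a feed-forward u = f(x) has no mechanism to enforce a property, while the optimization form satisfies its constraint by construction. The cost and constraint below (track a desired speed, never exceed a limit) are invented stand-ins, not a real planner:

```python
import numpy as np
from scipy.optimize import minimize

# Observation x = (current speed, desired speed, speed limit); output u = target speed.
x = np.array([25.0, 33.0, 30.0])

def g(u, x):
    return (u[0] - x[1]) ** 2      # toy cost: track the desired speed

def h(u, x):
    return u[0] - x[2]             # toy constraint h(u, x) <= 0: obey the limit

# Feed-forward form: a network just emits a number; nothing enforces the limit.
u_ff = x[1]                        # e.g. a net that copies the desired speed

# Optimization form: the constraint holds by construction.
res = minimize(g, x0=np.array([x[0]]), args=(x,),
               constraints=[{"type": "ineq", "fun": lambda u: -h(u, x)}])
print(u_ff)        # 33.0 -- violates the 30 m/s limit
print(res.x[0])    # ~30.0 -- pinned to the constraint boundary
```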

With the development of large models, this direct feed-forward end-to-end approach to autonomous driving has seen a wave of revival. Large models are powerful, of course, but here is a question I hope everyone will think about: if a large model were an omnipotent end-to-end solution, shouldn't it be able to play Go or Gomoku end-to-end, making paradigms like AlphaGo meaningless? I believe everyone knows the answer is no. That said, the feed-forward form can still serve as a fast approximate solver and achieve good results in most scenarios.

Judging from the publicly disclosed plans of the companies using a Neural Planner, the neural part only provides a number of initialization proposals for a subsequent optimization stage, to alleviate the severe non-convexity of that optimization. This is essentially the same thing as the fast rollout in AlphaGo. But nobody would call AlphaGo's subsequent MCTS search a mere "fallback" solution...
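The propose-then-refine pattern described here can be sketched in a few lines; all names are hypothetical and the cost is a toy, but the shape (cheap neural guesses seeding a serious local optimizer) is the point:

```python
import numpy as np
from scipy.optimize import minimize

def plan_cost(traj, scene):
    # Toy non-convex cost: end near the goal, stay clear of one obstacle.
    goal_term = np.sum((traj[-1] - scene["goal"]) ** 2)
    clearance = np.linalg.norm(traj - scene["obstacle"], axis=1)
    return goal_term + np.sum(np.exp(-clearance))

def refine(proposal, scene):
    # Local optimization seeded from one neural proposal (the expensive step).
    res = minimize(lambda v: plan_cost(v.reshape(-1, 2), scene), proposal.ravel())
    return res.fun, res.x.reshape(-1, 2)

def plan(neural_proposals, scene):
    # Each proposal seeds one local solve; keep the cheapest result.
    return min((refine(p, scene) for p in neural_proposals), key=lambda r: r[0])[1]

scene = {"goal": np.array([10.0, 0.0]), "obstacle": np.array([5.0, 0.2])}
t = np.linspace(0.0, 10.0, 5)
# Two hypothetical "neural" proposals: straight paths passing above and below.
proposals = [np.stack([t, np.full_like(t, dy)], axis=1) for dy in (1.0, -1.0)]
best = plan(proposals, scene)
```

Without the proposals, a single local solve started from a bad initial guess can land in a poor local minimum; the network's job is to place the starting points, just as fast rollouts guide AlphaGo's search.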

Finally, I hope this helps clarify the differences and connections between these concepts, and that everyone can be clear about what they are actually talking about when discussing these issues...


Original link: https://mp.weixin.qq.com/s/_OjgT1ebIJXM8_vlLm0v_A

