Let's talk about several large models and autonomous driving concepts that have become popular recently

Recently, applications of large models have remained very popular. Around the beginning of October, a series of rather gimmicky articles appeared that tried to apply large models to autonomous driving. I have been discussing related topics with many friends lately, and while writing this article I realized two things: on the one hand, I myself had been conflating several closely related but actually distinct concepts; on the other hand, extending these concepts leads to some interesting thoughts that are worth sharing and discussing.

Large (Language) Model

This is undoubtedly the most popular direction at present, and the one where papers are most concentrated. How can large language models help autonomous driving? On one hand, like GPT-4V, they provide extremely powerful semantic understanding through alignment with images, which I will set aside for now; on the other hand, an LLM can be used as an agent that directly produces driving behavior. The latter is currently the most attractive research direction and is inextricably linked to the line of work on embodied AI.

Most of the latter type of work seen so far uses an LLM that is 1) used directly, 2) fine-tuned through supervised learning, or 3) fine-tuned through reinforcement learning for the driving task. In essence, none of this escapes the previous paradigm of learning-based driving. A very direct question is: why might using an LLM be better? Intuitively, driving by means of words is inefficient and verbose. Then one day it dawned on me: the LLM effectively provides a pretraining stage for the agent through language. One of the important reasons RL was previously hard to generalize is that it was difficult to unify different tasks and use common data for pretraining, so each task had to be trained from scratch; the LLM solves this problem nicely. But several problems remain unsolved: 1) after pretraining, must language be retained as the output interface? This is inconvenient for many tasks and causes some redundant computation. 2) The LLM-as-agent approach still does not overcome the essential problems of existing model-free RL methods, and all the problems of model-free methods remain. Recently we have also seen some attempts at model-based LLM-as-agent, which may be an interesting direction.
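To make the LLM-as-agent idea concrete (this is my own minimal sketch, not any specific paper's method), here is what "using language as the output interface" looks like in practice. The `call_llm` function is a placeholder for whatever chat-completion API one actually uses, and the prompt format and action vocabulary are assumptions for illustration:

```python
import json

ACTIONS = ["keep_lane", "accelerate", "brake", "change_lane_left", "change_lane_right"]

def build_prompt(scene_description: str) -> str:
    # Serialize the driving scene into text -- this verbosity is exactly the
    # "language as output interface" overhead discussed above.
    return (
        "You are a driving agent. Current scene:\n"
        f"{scene_description}\n"
        f"Choose exactly one action from {ACTIONS} and answer in JSON as "
        '{"action": "...", "reason": "..."}.'
    )

def decide(scene_description: str, call_llm) -> str:
    """call_llm: any function str -> str backed by an LLM (placeholder here)."""
    raw = call_llm(build_prompt(scene_description))
    try:
        parsed = json.loads(raw)
        action = parsed.get("action", "brake")
    except json.JSONDecodeError:
        action = "brake"  # conservative fallback when the text output is malformed
    # Note: the "reason" field is free-form text -- it makes the output *look*
    # interpretable but carries no guarantee (see the complaint below).
    return action if action in ACTIONS else "brake"

# Example usage with a stubbed LLM:
if __name__ == "__main__":
    stub = lambda prompt: '{"action": "brake", "reason": "pedestrian ahead"}'
    print(decide("Pedestrian crossing 10 m ahead, ego speed 30 km/h.", stub))
```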

One last complaint about these papers: simply connecting an LLM and having it output a reason does not make your model interpretable. That reason may still be nonsense... Things that were not guaranteed before do not become guaranteed just because a sentence is output.

Large (Visual) Model

In fact, purely visual large models have still not shown that magical "emergence" moment. When people talk about large visual models, there are generally two possible meanings: one is a super visual feature extractor pretrained on massive web data, such as CLIP, DINO, or SAM, which greatly improves the model's semantic understanding; the other is a joint model over (image, action, ...) pairs implemented by world models, as represented by GAIA.
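To make the first meaning concrete, here is a minimal sketch of plugging a frozen, web-pretrained backbone into a small driving head. The torchvision ResNet is purely a stand-in (in practice one would load CLIP/DINO/SAM weights), and the head and its two-dimensional output are assumptions for illustration:

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in for a large pretrained vision model (CLIP / DINO / SAM in practice).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()          # expose the 2048-d pooled feature
for p in backbone.parameters():
    p.requires_grad = False          # frozen: used purely as a feature extractor

# Small task head trained on driving data on top of the frozen features.
driving_head = nn.Sequential(
    nn.Linear(2048, 256),
    nn.ReLU(),
    nn.Linear(256, 2),               # e.g. (steering, acceleration)
)

images = torch.randn(4, 3, 224, 224)  # dummy batch of camera frames
with torch.no_grad():
    feats = backbone(images)
controls = driving_head(feats)
print(controls.shape)                 # torch.Size([4, 2])
```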

In fact, I think the former is just the result of continuing to scale up linearly along the traditional path, and at present it is hard to see it bringing a qualitative change to autonomous driving. The latter has increasingly entered researchers' field of vision thanks to continuous publicity from Wayve and Tesla this year. When people talk about world models, they often assume the model is end-to-end (directly outputs actions) and tied to LLMs; this assumption is one-sided. My own understanding of world models is quite limited, so I will simply recommend LeCun's interview and @Yu Yang's survey on model-based RL rather than expanding on it here:

Yu Yang: On learning environment models (world models)
https://www.php.cn/link/a2cdd86a458242d42a17c2bf4feff069

Pure visual autonomous driving

This one is actually easy to understand: it refers to an autonomous driving system that relies only on visual sensors. This is the ultimate wish of autonomous driving: to drive with a pair of eyes, like a human. This concept is generally associated with the two kinds of large models above, because the complex semantics of images require strong abstraction capabilities to extract useful information. Under Tesla's recent publicity offensive, the concept also overlaps with the end-to-end approaches discussed below. But in fact there are many ways to achieve purely visual driving; end-to-end is naturally one of them, but not the only one. The hardest problem in realizing purely visual autonomous driving is that vision is inherently insensitive to 3D information, and large models have not essentially changed this. Specifically: 1) because vision passively receives electromagnetic waves, it cannot, unlike other sensors, directly measure geometric information in 3D space; 2) perspective makes distant objects extremely sensitive to measurement error. This is very unfriendly to downstream planning and control, which by default operate in a 3D space with roughly uniform error. But is driving by vision the same as being able to accurately estimate 3D distance and speed? I think this, in addition to semantic understanding, is a representation question worth in-depth study in purely visual autonomous driving.
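A small numerical illustration of point 2): under a pinhole stereo model with depth Z = f·B/d (focal length f in pixels, baseline B, disparity d), a fixed matching error of Δd pixels produces a depth error ΔZ ≈ Z²·Δd/(f·B) that grows roughly quadratically with distance. The numbers below (f = 1000 px, B = 0.3 m, Δd = 0.5 px) are illustrative assumptions only:

```python
import numpy as np

f_px = 1000.0   # focal length in pixels (assumed)
B = 0.3         # stereo baseline in metres (assumed)
dd = 0.5        # disparity / matching error in pixels (assumed)

for Z in [10.0, 30.0, 60.0, 100.0]:           # true distances in metres
    dZ = Z ** 2 * dd / (f_px * B)             # first-order depth error
    print(f"Z = {Z:5.0f} m  ->  depth error ~ {dZ:6.2f} m")
# Z =    10 m  ->  depth error ~   0.17 m
# Z =    30 m  ->  depth error ~   1.50 m
# Z =    60 m  ->  depth error ~   6.00 m
# Z =   100 m  ->  depth error ~  16.67 m
```

The same quadratic growth holds for monocular depth from apparent size, which is why errors on distant objects are so punishing for downstream planning.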

End-to-end autonomous driving

This concept refers to using a jointly optimized model all the way from the sensors to the final output control signal (in fact, I think it can also broadly include outputs up to the planning layer, i.e. waypoints). This can be a direct end-to-end method that, like ALVINN as early as the 1980s, feeds sensor data into a neural network and outputs control signals directly, or a staged end-to-end method like this year's CVPR best paper UniAD. The common point of these methods is that the downstream supervision signal can be passed directly to upstream modules, instead of each module optimizing its own self-defined objective. Overall this is the right idea; after all, deep learning made its fortune on exactly this kind of joint optimization. However, for systems such as autonomous driving or general-purpose robots, which are extremely complex and interact with the physical world, there are many problems to overcome in engineering implementation and in the efficiency of organizing and using data.
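A minimal sketch of the key point, that supervision applied only at the planning output still updates the perception module, using toy module definitions of my own (not UniAD's actual architecture):

```python
import torch
import torch.nn as nn

# Toy "staged" end-to-end stack: perception -> intermediate scene feature -> planner.
perception = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1),
                           nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1),
                           nn.Flatten())          # image -> 16-d scene feature
planner = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                        nn.Linear(64, 10))        # scene feature -> 5 waypoints (x, y)

opt = torch.optim.Adam(list(perception.parameters()) + list(planner.parameters()), lr=1e-3)

images = torch.randn(8, 3, 64, 64)                # dummy camera batch
gt_waypoints = torch.randn(8, 10)                 # dummy planning labels

pred = planner(perception(images))
loss = nn.functional.mse_loss(pred, gt_waypoints)  # supervision only at the planning output...
opt.zero_grad()
loss.backward()                                    # ...but gradients also reach the perception module
opt.step()
```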

Feed-Forward End-to-End Autonomous Driving

This concept seems to be rarely mentioned, but in fact I find that end-to-end itself is valuable; the problem lies in the feed-forward way of consuming observations. Myself included, people have tended to assume by default that end-to-end driving must take a feed-forward form, because 99% of current deep-learning-based methods assume such a structure: the final output of interest (such as the control signal) is u = f(x), where x is the collection of sensor observations and f can be a very complex function. But in some problems we want the final output to satisfy, or be close to satisfying, certain properties, and a feed-forward form can hardly provide such guarantees. So there is another way to write it: u* = argmin g(u, x) s.t. h(u, x) ≤ 0, where g encodes the objective and h the constraints the output must respect.
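A minimal contrast between the two forms, with toy, assumed definitions of f, g, and h: the feed-forward u = f(x) gives an answer in one pass but may violate the constraint, while the argmin form enforces h(u, x) ≤ 0 explicitly.

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0])             # toy "observation"

def f(x):                            # feed-forward policy u = f(x): one pass, no guarantees
    return 0.5 * x

def g(u, x):                         # cost to minimise, e.g. tracking a nominal command
    return np.sum((u - f(x)) ** 2)

def h(u, x):                         # constraint h(u, x) <= 0, e.g. an actuation limit
    return np.sum(u ** 2) - 1.0

u_ff = f(x)                          # feed-forward answer (here it violates the constraint)

res = minimize(lambda u: g(u, x), x0=u_ff, method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda u: -h(u, x)}])  # SLSQP expects fun(u) >= 0
u_star = res.x                       # u* = argmin g(u, x)  s.t.  h(u, x) <= 0

print("feed-forward u :", u_ff, " h =", h(u_ff, x))
print("optimised    u*:", u_star, " h =", h(u_star, x))
```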

With the development of large models, this direct feed-forward end-to-end autonomous driving solution has seen a wave of revival. Large models are of course very powerful, but let me raise a question for everyone to think about: if large models were omnipotent end-to-end, shouldn't a large model also be able to play Go or Gomoku end-to-end, making paradigms like AlphaGo meaningless? I believe everyone knows the answer is no. Of course, the feed-forward approach can still serve as a fast approximate solver and achieve good results in most scenarios.

Judging from the schemes disclosed so far by companies using a Neural Planner, the neural part only provides a number of initialization proposals for a subsequent optimization stage, to alleviate the fact that this optimization is highly non-convex. This is essentially the same idea as the fast rollout in AlphaGo, yet AlphaGo would not call the subsequent MCTS search a "cover-up" solution...
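A schematic of this proposal-then-refine pattern, with a stubbed "neural" proposal generator and a toy cost of my own, just to show how the learned part only warm-starts the optimizer:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def neural_proposals(obs, k=8):
    # Stand-in for the learned planner: k candidate trajectories (5 waypoints, x/y each).
    return rng.normal(size=(k, 10))

def cost(traj, obs):
    # Toy non-convex planning cost (e.g. comfort plus staying near a reference line).
    return np.sum(traj ** 2) + np.sin(traj).sum()

obs = None                                            # placeholder observation
candidates = neural_proposals(obs)
best = min(candidates, key=lambda t: cost(t, obs))    # pick the cheapest proposal...
refined = minimize(cost, x0=best, args=(obs,)).x      # ...then refine it with local optimisation

print("proposal cost:", cost(best, obs), "-> refined cost:", cost(refined, obs))
```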

Finally, I hope this helps clarify the differences and connections between these concepts, so that when discussing these issues everyone knows clearly what they are actually talking about...


Original link: https://mp.weixin.qq.com/s/_OjgT1ebIJXM8_vlLm0v_A
