
Exciting! A preliminary study of GPT-4V in autonomous driving

王林 (forwarded)
2023-10-19 11:21:14

Update: added a new example, in which a self-driving delivery vehicle drove onto a freshly paved road surface.

With everyone watching, GPT-4 finally launched its vision capabilities today. This afternoon my friends and I quickly tested GPT-4V's image perception. Although we had high expectations, we were still greatly surprised. TL;DR: I think the semantics-related problems in autonomous driving can already be handled very well by large models, but their reliability and spatial perception remain unsatisfactory. They should be more than enough to solve some of the so-called efficiency-related corner cases, but relying entirely on a large model to drive independently and guarantee safety is still very far away.

1 Example 1: Unknown obstacles on the road

[Images: road scene with an unknown obstacle ahead, and GPT-4V's description of it]

GPT-4V's description:

Accurate: it detected three trucks; the license plate of the lead truck is basically correct (ignoring the Chinese characters); the weather and environment are correct; and it identified the unknown obstacle ahead without any prompting.

Inaccurate: it cannot tell whether the third truck is on the left or the right, and the text on top of the second truck is a random guess (perhaps due to insufficient resolution?).

Not stopping there, we continued with a small hint, asking what this object is and whether it can be driven over.

[Image: GPT-4V's answer to the follow-up question]

Impressive! We tested multiple similar scenarios, and its performance on unknown obstacles is remarkable.

2 Example 2: Understanding water on the road

[Image: road scene with a standing-water warning sign]

With no prompting at all, it automatically recognized the warning sign; that much should be routine by now. We continued with a few hints:

[Image: GPT-4V's answer after the hints]

We were shocked again... It automatically pointed out the spray behind the truck and also mentioned the puddle, but once again got the direction wrong, saying it was on the left... It seems some prompt engineering is needed here to get GPT to output position and direction more reliably.
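One plausible way to push GPT-4V toward more reliable spatial answers is a system prompt that fixes a frame of reference before the image is shown. The sketch below builds an OpenAI-style Chat Completions message list for this; the exact prompt wording, model name, and helper function are my own assumptions, not something the original experiments used.

```python
import base64

def build_spatial_prompt(image_path: str) -> list:
    """Build a Chat Completions message list that asks GPT-4V to report
    positions in a fixed, ego-centric frame of reference.

    The system prompt pins down what 'left'/'right' mean, which is one
    way to reduce the left/right confusion seen in the examples above.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    system = (
        "You are a driving-scene analyst. Report every object's lateral "
        "position strictly as 'left', 'center', or 'right' relative to the "
        "ego camera, and its longitudinal distance in estimated meters."
    )
    user_content = [
        {"type": "text",
         "text": "List all obstacles with position and whether each is "
                 "safe to drive over."},
        {"type": "image_url",
         "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
    ]
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_content},
    ]
```

The returned list would then be passed as `messages` to the vision-capable chat endpoint (e.g. `client.chat.completions.create(...)`), whose exact signature depends on the client library version.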

3 Example 3: A vehicle that spun out and crashed straight into the guardrail

[Image: first frame of the crash scene]

Feeding in the first frame: because there is no temporal information, the truck on the right is simply treated as parked. So here's another frame:

[Image: second frame of the crash scene]

This time it said, unprompted, that these two vehicles had broken through the guardrail and were hanging over the edge of the highway. Great... But the road signs, which looked like the easier part, were read wrong... All I can say is, that's a large model for you: it keeps surprising you, and you never know when it will make you cry... Another frame:

[Image: third frame of the crash scene]

This time it directly mentions the debris on the road, which impressed me again... But once more it misidentified the arrow on the road... Overall, the information that demands special attention in this scene is covered; issues like road markings are flaws that don't overshadow the result.
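Since a single frame carries no motion information (the crashed truck was first read as "parked"), one workaround is to pack several consecutive frames into one user turn so the model can reason over time. This is a minimal sketch under the same assumed OpenAI-style message format; the function name and interval wording are hypothetical.

```python
def build_multi_frame_messages(frames_b64: list) -> list:
    """Pack several consecutive frames (base64 JPEG strings) into a
    single user turn, so the model can reason about motion that no
    single frame conveys."""
    content = [{
        "type": "text",
        "text": "These frames are consecutive, roughly one second apart. "
                "Describe what is happening over time, not just in the "
                "last frame.",
    }]
    for b64 in frames_b64:
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        })
    return [{"role": "user", "content": content}]
```

Sending frames together like this trades token cost for temporal context; whether GPT-4V actually exploits the ordering is something one would have to verify experimentally.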

4 Example 4: A funny one

[Image: the funny example and GPT-4V's answer]

It can only be described as spot-on. Cases that previously seemed extremely difficult, such as "someone waving at you", now look like child's play by comparison. Semantic corner cases like this can be solved.

5 Example 5: A famous scene... the delivery vehicle that mistakenly drove onto a freshly paved road

[Images: multi-turn GPT-4V conversation about the stuck delivery vehicle]

At first it was relatively conservative and did not directly guess the cause, offering a variety of hypotheses instead, which is in line with the goal of alignment. After applying chain-of-thought (CoT) prompting, we discovered the problem: it did not understand that the car was a self-driving vehicle, so supplying that information through the prompt yields a more accurate answer. In the end, after a pile of prompts, it could output the conclusion that the freshly laid asphalt was not suitable for driving on. The final result is still OK, but the process was tortuous and required quite a bit of prompt engineering and careful design. Part of the reason may be that this is not a first-person image, so it can only speculate from a third-person perspective; this example is therefore not very conclusive.
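The two fixes that worked here, supplying the missing context (that the pictured car is autonomous) and forcing step-by-step reasoning, can be folded into a single prompt template. The sketch below is a hypothetical illustration of that pattern; the wording and function name are my own, not taken from the original experiments.

```python
def build_cot_prompt(context: str, question: str) -> str:
    """Compose a chain-of-thought prompt that supplies missing context
    (e.g. that the pictured car is an autonomous delivery vehicle)
    before asking for a conclusion."""
    return (
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Think step by step: first describe the road surface, then the "
        "vehicle's position, then infer why the situation occurred, and "
        "only then state your conclusion."
    )

prompt = build_cot_prompt(
    context="The vehicle in the image is an autonomous delivery vehicle.",
    question="Why is it stuck, and should it continue driving?",
)
```

This text would accompany the image in the user turn; the point is that context the model cannot see (here, that the vehicle is self-driving) has to be injected explicitly.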

6 Summary

These quick experiments amply demonstrate the power and generalization of GPT-4V, and appropriate prompts should be able to bring out its full strength. Solving semantic corner cases looks very promising, but hallucination will still plague applications in safety-critical scenarios. Very exciting. Personally, I believe that using such large models sensibly can greatly accelerate the development of L4 and even L5 autonomous driving. But does the LLM have to drive directly, end-to-end in particular? That remains debatable. I have been thinking about this a lot lately and will find time to write it up and discuss with you all~


Original link: https://mp.weixin.qq.com/s/RtEek6HadErxXLSdtsMWHQ

