
OpenAI and Google have made big moves in two consecutive days, both of which want to make AI assistants "smart"

WBOY · Original · 2024-06-03

After seeing OpenAI's spring release yesterday, it was not hard to guess that today's Google I/O keynote would feature an AI assistant of its own.

After all, by releasing GPT-4o the day before Google I/O, Altman was clearly taking direct aim at Google, confident of landing a precise strike and content to carry this "red versus blue" rivalry to the end.

Sure enough, at the keynote, Google CEO Sundar Pichai invited DeepMind founder Demis Hassabis, making his first appearance on the Google I/O stage, to unveil Google's new AI assistant: Project Astra.


What is Project Astra?

Project Astra is a real-time, multimodal, general-purpose AI assistant built on Google Gemini as its underlying engine, effectively the successor to Google Assistant.

Like Siri, Alexa, and the other AI assistants we have used before, you can still interact with it by voice; the difference is that, with the strengths of generative AI, it understands more and can do more. More importantly, it now has visual recognition, letting the AI assistant open its eyes and see the world.

It was precisely this visual intelligence that Google's demo video highlighted at the keynote.

In the demo video, a Google engineer holds up a phone with the camera on and has Gemini identify which object in the room is making a sound, explain code shown on a monitor, and even infer the presenter's current location from the street scene outside the window.
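Project Astra itself is not publicly available, but the public Gemini API already supports this style of image-plus-question prompting. Below is a minimal sketch using the google-generativeai Python SDK; the model name, file name, and prompt are illustrative assumptions, not what Astra runs on.

```python
# A minimal sketch of frame-based visual Q&A with the public Gemini API,
# approximating the kind of camera-driven questions shown in the Astra demo.
# NOTE: model choice, file name, and prompt are illustrative assumptions.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumes a Gemini API key

model = genai.GenerativeModel("gemini-1.5-flash")  # a publicly available multimodal model
frame = Image.open("camera_frame.jpg")             # e.g. a frame grabbed from the phone camera

# Multimodal prompt: an image plus a text question
response = model.generate_content([frame, "What in this scene is making a sound?"])
print(response.text)
```

The real Astra demo runs continuously over live video and audio; this sketch only shows the single-frame building block.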


Beyond phones, Google also brought the AI assistant to AR glasses. When an engineer points smart glasses running the assistant at a system design sketched on a whiteboard and asks how the system could be improved, the assistant can even offer concrete suggestions for improving the design.


This is the visual intelligence Google showed off in its AI assistant. Powered by Gemini, the assistant's interaction capabilities have been greatly enhanced.

However, in the naturalness of the actual interaction, this assistant still lags well behind what OpenAI demonstrated with GPT-4o the day before.

OpenAI successfully "steals the show"

Just one day before Google I/O, OpenAI held a major spring event of its own. GPT-4o was the star of that release, and an AI assistant running on a phone was the key feature demonstrated.

Judging from the capabilities shown at OpenAI's event, its demo came off better in the approachability of the content, the naturalness of the interaction, and the assistant's multimodal abilities.

This is because when OpenAI put GPT-4o on the phone, it not only added visual intelligence but also made the assistant respond in real time (OpenAI cites an average response latency of 320 milliseconds), allowed it to be interrupted at any time, and even enabled it to understand human emotions.

In the visual-intelligence demo, OpenAI wrote a math equation on paper and had the AI assistant walk through solving it step by step, like a patient schoolteacher.
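The on-stage demo used GPT-4o's streaming voice-and-video mode, which is not what the basic API exposes; but the same vision capability can be reproduced with an ordinary chat completion. A minimal sketch with the OpenAI Python SDK follows, where the image file name and prompt are illustrative assumptions.

```python
# A minimal sketch of a GPT-4o vision request via the OpenAI Python SDK,
# approximating the "solve the handwritten equation" demo.
# NOTE: the file name and prompt are illustrative assumptions; the realtime
# voice/video mode shown on stage uses a different, streaming interface.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Encode a photo of the handwritten equation as a base64 data URL
with open("equation_on_paper.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Guide me through solving this equation step by step, "
                     "without giving away the final answer."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)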


And when you make a "video call" with GPT-4o, it can read your facial expressions and understand your emotions, telling whether you look happy or sad at that moment, much like a human would.


It is not hard to see that, with today's large-model technology behind them, both Google and OpenAI are trying to reinvent the once-crude AI assistant, hoping it can interact with us as naturally as a real person.

Judging from the demo videos at the two events, AI assistants built on large models as their underlying engine really do feel a generation apart from the Siri and Alexa of the past.

In fact, with generative AI and large-model technology in full swing, Apple is also trying to reinvent Siri. Bloomberg, citing people familiar with the matter, previously reported that Apple was in talks with both OpenAI and Google about using their large models in iOS 18.

As for whether such an AI assistant can make Siri popular again and become the killer app of AI phones, that depends on whether Apple can work its magic on the AI assistant once more.

