
Google releases multi-modal Bard assistant: another milestone towards the era of interactive AI

WBOY
2023-10-06 17:33

At a product launch event a few days ago, Google officially released its new-generation Android flagship phones, the Pixel 8/Pro series, equipped with the Tensor G3 chip. The chip can run more complex ML (machine learning) models, enabling a number of new AI features on the phones, such as reading web pages aloud to users in different languages with "more natural" voices and a virtual assistant that speaks more naturally.

Google pointed out that the Pixel 8 Pro is the first phone to run Google's foundation large model directly on the device, which requires 150 times the compute of the largest ML model on the Pixel 7.

At the same time, Google announced the launch of "Assistant with Bard" for Android and iOS devices, which combines the phone's personal-assistant functionality with generative AI. Users can interact with the Bard assistant through text, voice, or images; in other words, it is multimodal.

When a user asks "What important emails have I missed this week?", the Bard assistant will first list the key points and details of each important email and provide links to the corresponding messages. It can also extract event addresses from the emails and display them in Google Maps.


If the user wants to post a photo of a puppy to social media, they only need to summon the Bard assistant's floating dialog box and ask it to write the post. The Bard assistant will recognize the image and draft the corresponding caption.


Google said it will soon roll out Bard Assistant to early testers to get feedback and launch it to the public in the coming months.

In addition, DeepMind co-founder Mustafa Suleyman said in a recent interview that

the current stage of generative AI is only a transitional phase of the technology, and the next era will be that of interactive AI, in which AI will, based on the user's different task needs, coordinate other software or contact real people to complete the work.

He believes the first wave of artificial intelligence was mainly about classification: deep learning showed that humans can train AI to classify input data such as images, video, audio, and language. Humanity is currently in the second wave, "generative artificial intelligence," which takes input data and generates new data. The third wave will belong to "interactive artificial intelligence." "Conversation is the interactive interface of the future": users will not just click buttons and type text, but talk to the AI directly. By then, interactive artificial intelligence will be able to take action independently.

Tianfeng Securities pointed out that

the importance of scenarios is highlighted in the stage where consumer-facing (C-end) AI applications land. Chatbots, AI companions, and content-production tools are the first scenarios to be implemented, and the development speed and commercialization progress of AI applications in these scenarios may exceed expectations.

Analysts predict that the iteration of artificial intelligence and the catalytic effect of subsequent events will continue to accelerate. In the second half of the year, overseas tech giants will iterate their applications and models significantly faster, and the capabilities of general-purpose chatbots are expected to be further enhanced, which may improve the user experience and further grow the user base.

Huajin Securities added that the shift of large models from general-purpose to vertical scenarios is primarily an exploration of commercialization and is the driving force pushing large models from training toward inference.

With the development and improvement of vertical large models, applications are the key to unlocking greater room for growth. Edge computing is a clear and sizable incremental market that has now reached the industry-implementation stage, and cloud computing companies, telecom operators, equipment manufacturers, CDN companies, and others are all actively promoting its rollout.

Source: Financial Associated Press


Statement:
This article is reproduced from sohu.com. If there is any infringement, please contact admin@php.cn to request deletion.