Here's how to teach ChatGPT how to read pictures
The "Wen Sheng Tu" model will be popular in 2022, so what will be popular in 2023?
The answer from machine learning engineer Daniel Bourke is: the other way around!
No, a newly released "pictures and text" model has exploded on the Internet, and its excellent effects have caused many netizens to repost and like it.
is not only a basic "look at pictures and speak" function, but also can write love poems, explain plots, design dialogues for objects in pictures, etc., this AI can do all Hold it firmly!
For example, when you spot a tempting dish online, just send it the picture and it will immediately identify the ingredients and cooking steps required:
Even tiny, easy-to-miss details in the picture can be "seen" clearly.
When asked how to get out of the upside-down house in the picture, the AI's answer was: isn't there a slide on the side?
This new AI is called BLIP-2 (Bootstrapping Language-Image Pre-training 2), and the code is currently open source.
Most importantly, unlike previous work, BLIP-2 uses a general-purpose pre-training framework, so in principle it can be hooked up to any language model of your choice.
Some netizens are already imagining the powerful combination once the interface is swapped for ChatGPT.
Steven Hoi, one of the authors, even said: BLIP-2 will be the "multi-modal version of ChatGPT" in the future.
So, what else is magical about BLIP-2? Let's take a look.
First-class understanding ability
BLIP-2 can be used in a wide variety of ways.
Just give it a picture and you can start talking to it; it can tell stories, reason about the scene, generate personalized text, and meet all kinds of other requests.
For example, BLIP-2 can not only easily identify the scenic spot in the picture as the Great Wall, but also introduce the history of the Great Wall:
The Great Wall of China was built in 221 BC by Qin Shi Huang to protect the imperial capital against invasion from the north.
Give it a movie still and BLIP-2 not only knows which film it is from, but also knows how the story ends: the Titanic sinks and the male lead drowns.
BLIP-2 also reads human expressions very accurately.
When asked what the man's expression in this picture was and why, BLIP-2's answer was: he is afraid of the chicken because it is flying towards him.
What's even more amazing is that BLIP-2 also performs very well on open-ended questions.
Let it write a romantic sentence based on the picture below:
Its answer: love is like a sunset, it's hard to see it coming, but when it happens, it's so beautiful.
Not only does it understand the picture perfectly, it clearly has some literary flair as well!
Ask it to write a dialogue for the two animals in the picture, and BLIP-2 easily nails the "arrogant cat x silly, adorable dog" character setup:
Cat: Hey, dog, can I ride on your back?
Dog: Of course, why not?
Cat: I'm tired of walking in the snow.
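Since the code is open source, you can try this kind of prompting yourself. Below is a minimal sketch based on the LAVIS library that hosts BLIP-2 (see the GitHub link at the end of this article); the model name "blip2_t5" / "pretrain_flant5xl", the image path, and the prompt are illustrative assumptions, and the exact API may differ from the version you install.

import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

# Pick a device; BLIP-2 checkpoints are large, so a GPU is strongly preferred.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a BLIP-2 variant with a frozen FlanT5 language model.
# (Model names here are assumptions based on the LAVIS docs; other
# checkpoints, e.g. OPT-based ones, can be selected the same way.)
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_t5", model_type="pretrain_flant5xl", is_eval=True, device=device
)

# "cat_and_dog.jpg" is a placeholder for any local image.
raw_image = Image.open("cat_and_dog.jpg").convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# Free-form prompting: ask a question, request a caption, or set up a dialogue.
print(model.generate({
    "image": image,
    "prompt": "Question: what are the two animals doing? Answer:",
}))

Swapping name / model_type selects a different frozen language model backbone, which is exactly the flexibility the general-purpose pre-training framework is meant to provide.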
So, how does BLIP-2 achieve such a powerful understanding ability?
New SOTA on multiple vision-language tasks
Considering that end-to-end training of large-scale models is getting more and more expensive, BLIP-2 adopts a general and efficient pre-training strategy:
bootstrap vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models.
This also means that everyone is free to choose the models they want to use.
To bridge the gap between the two modalities, the researchers proposed a lightweight Querying Transformer (Q-Former).
The Q-Former is pre-trained in two stages:
the first stage bootstraps vision-language representation learning from the frozen image encoder, and the second stage bootstraps vision-to-language generative learning from the frozen language model.
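The actual Q-Former is a full BERT-style transformer module; the snippet below is only a simplified, hypothetical sketch of the bridging idea described above (learned query tokens cross-attend to the frozen image features, and the query outputs are projected into the frozen language model's input space), not the Salesforce implementation, and all sizes are placeholder values.

import torch
import torch.nn as nn

class TinyQFormer(nn.Module):
    """Toy stand-in for BLIP-2's Q-Former: a fixed set of learned query
    tokens cross-attends to frozen image features, and the query outputs
    are projected into the frozen language model's embedding space."""

    def __init__(self, num_queries=32, dim=768, llm_dim=2048):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))  # learned query tokens
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.to_llm = nn.Linear(dim, llm_dim)  # projection into the frozen LLM's input space

    def forward(self, image_feats):  # image_feats: (batch, num_patches, dim) from a frozen encoder
        q = self.queries.unsqueeze(0).expand(image_feats.size(0), -1, -1)
        attn_out, _ = self.cross_attn(q, image_feats, image_feats)
        q = self.norm1(q + attn_out)
        q = self.norm2(q + self.ffn(q))
        return self.to_llm(q)  # (batch, num_queries, llm_dim) "soft prompts" for the frozen LLM

# Toy usage: pretend these are patch features from a frozen ViT image encoder.
frozen_image_feats = torch.randn(1, 257, 768)
soft_prompts = TinyQFormer()(frozen_image_feats)
print(soft_prompts.shape)  # torch.Size([1, 32, 2048])

Because the image encoder and the language model both stay frozen, only the relatively small Q-Former and projection layer are trained, which is where the large reduction in trainable parameters comes from.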
To test BLIP-2's performance, the researchers evaluated it on zero-shot image-to-text generation, visual question answering, image-text retrieval, and image captioning tasks.
The results show that BLIP-2 achieves new SOTA on multiple vision-language tasks.
Among them, BLIP-2 outperforms Flamingo 80B by 8.7% on zero-shot VQAv2 while using 54x fewer trainable parameters.
And, as expected, a stronger image encoder or a stronger language model yields better performance.
It is worth mentioning that the researchers also note at the end of the paper that BLIP-2 still has a shortcoming: it lacks in-context learning ability.
Each pre-training sample contains only a single image-text pair, so the model currently cannot learn correlations across multiple image-text pairs within one sequence.
Research Team
The research team of BLIP-2 comes from Salesforce Research.
The first author is Junnan Li, who is also the author of BLIP, which was launched a year ago.
He is currently a senior research scientist at Salesforce Research Asia. He received his bachelor's degree from the University of Hong Kong and his Ph.D. from the National University of Singapore.
His research interests are broad, spanning self-supervised learning, semi-supervised learning, weakly supervised learning, and vision-language.
Below are the paper and GitHub links for BLIP-2; interested readers can check them out~
Paper link: https://arxiv.org/pdf/2301.12597.pdf
GitHub link: https://github.com/salesforce/LAVIS/tree/main/projects/blip2
Reference link: [1] https://twitter.com/mrdbourke/status/1620353263651688448
[2]https://twitter.com/LiJunnan0409/status/1620259379223343107