


MiniGPT-4 has been upgraded to MiniGPT-v2: multi-modal tasks can still be completed without GPT-4.
A few months ago, researchers from KAUST (King Abdullah University of Science and Technology, Saudi Arabia) released the MiniGPT-4 project, which provides image understanding and dialogue capabilities similar to GPT-4.
For example, MiniGPT-4 can describe the scene in an image: "The picture shows a cactus growing on a frozen lake. There are huge ice crystals around the cactus, and snow-capped peaks in the distance..." If you then ask whether such a scene could occur in the real world, MiniGPT-4 answers that the image is uncommon in the real world and explains why.
Just a few months later, the KAUST team and researchers from Meta announced that they have upgraded MiniGPT-4 to MiniGPT-v2.
Paper: https://arxiv.org/pdf/2310.09478.pdf
Project homepage: https://minigpt-v2.github.io/
Demo: https://minigpt-v2.github.io/
Specifically, MiniGPT-v2 can serve as a unified interface for handling various vision-language tasks. To support this, the paper proposes using a unique identifier token for each task during training. These identifiers help the model easily distinguish between task instructions and improve per-task learning efficiency.
To evaluate the performance of the MiniGPT-v2 model, the researchers conducted extensive experiments on different visual-language tasks. Results show that MiniGPT-v2 achieves SOTA or comparable performance on various benchmarks compared to previous vision-language general-purpose models such as MiniGPT-4, InstructBLIP, LLaVA, and Shikra. For example, MiniGPT-v2 outperforms MiniGPT-4 by 21.3%, InstructBLIP by 11.3%, and LLaVA by 11.7% on the VSR benchmark.
Below, specific examples illustrate the role of MiniGPT-v2's task identifier tokens (a prompt-format sketch follows these examples).
For example, by adding the [grounding] identifier, the model can generate an image description with spatial position awareness:
By adding the [detection] identifier, the model can directly extract the objects mentioned in the input text and find their spatial positions in the image:
By drawing a box around an object in the image and adding [identify], the model can directly name the object:
By adding [refer] together with a description of an object, the model can directly find the object's corresponding spatial position:
You can also chat about the image directly, without adding any task identifier:
The model's spatial awareness has also become stronger: you can directly ask who appears on the left, middle, and right of the picture:
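The examples above all follow the same pattern: a task identifier token is prefixed to the user instruction. Below is a minimal Python sketch of how such prompts could be assembled; the exact template string and the identifier set are assumptions based on the paper's general description, not the project's actual code.

```python
# Minimal sketch of MiniGPT-v2-style task-identifier prompts.
# The template and identifier list below are illustrative assumptions.

TASK_IDENTIFIERS = {"grounding", "detection", "identify", "refer", "vqa", "caption"}

def build_prompt(instruction: str, task: str | None = None) -> str:
    """Prefix the user instruction with an optional task identifier token."""
    if task is not None:
        if task not in TASK_IDENTIFIERS:
            raise ValueError(f"unknown task identifier: {task}")
        instruction = f"[{task}] {instruction}"
    # <Img>...</Img> marks where the projected visual tokens are spliced in.
    return f"[INST] <Img><ImageHere></Img> {instruction} [/INST]"

print(build_prompt("describe this image in detail", task="grounding"))
# [INST] <Img><ImageHere></Img> [grounding] describe this image in detail [/INST]
print(build_prompt("what is happening in this photo?"))  # plain chat, no identifier
```

Because each task gets a distinct prefix token, the model can condition its output format (plain text, boxes, object names) on a single, unambiguous signal rather than inferring the task from the instruction wording alone.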
Method introduction
The MiniGPT-v2 model architecture is shown in the figure below. It consists of three components: a visual backbone, a linear projection layer, and a large language model.
Visual backbone: MiniGPT-v2 uses EVA as its backbone model, and the visual backbone is frozen during training. The model is trained at an image resolution of 448x448, with the positional encoding interpolated to scale to this higher image resolution.
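Scaling a ViT-style backbone pre-trained at a lower resolution is typically done by resizing its positional embedding grid. The PyTorch sketch below shows this generic technique (not the project's actual code); the EVA-style embedding dimension of 1408 and a 14px patch size are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed: torch.Tensor, new_grid: int) -> torch.Tensor:
    """Bicubically resize a ViT positional embedding to a larger patch grid.

    pos_embed: [1, 1 + old_grid**2, dim], with the CLS token first.
    Generic sketch of positional-encoding interpolation, as used to adapt
    backbones to higher input resolutions.
    """
    cls_tok, patch_pos = pos_embed[:, :1], pos_embed[:, 1:]
    dim = pos_embed.shape[-1]
    old_grid = int(patch_pos.shape[1] ** 0.5)
    patch_pos = patch_pos.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=(new_grid, new_grid),
                              mode="bicubic", align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)
    return torch.cat([cls_tok, patch_pos], dim=1)

# e.g. a backbone pre-trained at 224px (16x16 patches of 14px) scaled to 448px:
pe = torch.randn(1, 1 + 16 * 16, 1408)
print(interpolate_pos_embed(pe, new_grid=32).shape)  # torch.Size([1, 1025, 1408])
```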
Linear projection layer: this layer projects all visual tokens from the frozen visual backbone into the language model space. However, for higher-resolution images (e.g., 448x448), projecting every image token produces a very long input sequence (e.g., 1024 tokens), which significantly reduces training and inference efficiency. The paper therefore simply concatenates 4 adjacent visual tokens in the embedding space and projects them together into a single embedding in the language model's feature space, reducing the number of visual input tokens by a factor of 4.
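This token-merging step is straightforward to express in PyTorch. The sketch below groups every 4 adjacent visual tokens and projects them with a single linear layer; the dimensions (1408 for an EVA-style backbone, 4096 for LLaMA2-7B's hidden size) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConcatProjector(nn.Module):
    """Merge every 4 adjacent visual tokens, then project into the LLM space.

    A sketch of the idea described above; dims are illustrative.
    """
    def __init__(self, vis_dim: int = 1408, llm_dim: int = 4096, group: int = 4):
        super().__init__()
        self.group = group
        self.proj = nn.Linear(vis_dim * group, llm_dim)

    def forward(self, vis_tokens: torch.Tensor) -> torch.Tensor:
        b, n, d = vis_tokens.shape            # e.g. [B, 1024, 1408]
        assert n % self.group == 0
        # Adjacent tokens are concatenated along the feature axis.
        x = vis_tokens.reshape(b, n // self.group, d * self.group)
        return self.proj(x)                   # e.g. [B, 256, 4096]

tokens = torch.randn(2, 1024, 1408)
print(ConcatProjector()(tokens).shape)  # torch.Size([2, 256, 4096])
```

The 4x shorter sequence directly cuts the attention cost the language model pays per image, which is where the claimed training and inference savings come from.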
Large language model: MiniGPT-v2 uses the open-source LLaMA2-chat (7B) as its language model backbone. The language model serves as a unified interface for various vision-language inputs, and the paper directly uses LLaMA-2 language tokens to perform the various vision-language tasks. For grounding tasks that require generating spatial locations, the language model is asked to directly generate textual representations of bounding boxes.
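For illustration, here is one way such a textual bounding-box representation could be produced. It follows the paper's general idea of integer coordinates normalized to a fixed range; the exact token syntax ({<x1><y1><x2><y2>}) and the 0-100 range should be treated as assumptions.

```python
def box_to_text(box, img_w, img_h, bins=100):
    """Serialize a pixel-space box (x1, y1, x2, y2) as plain text tokens.

    Assumed format: coordinates quantized to integers in [0, bins],
    so the language model can emit boxes as ordinary text.
    """
    x1, y1, x2, y2 = box
    q = lambda v, size: round(v / size * bins)
    return f"{{<{q(x1, img_w)}><{q(y1, img_h)}><{q(x2, img_w)}><{q(y2, img_h)}>}}"

print(box_to_text((120, 60, 480, 300), img_w=640, img_h=480))
# {<19><12><75><62>}
```

Because boxes become ordinary token sequences, no detection head or extra vocabulary machinery is needed: the same next-token decoder handles captions, answers, and spatial outputs.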
Multi-task instruction training
The model is trained with task-identifier instructions in three stages. The datasets used in each training stage are listed in Table 2 of the paper.
Phase 1: Pre-training. Weakly labeled datasets are given a high sampling rate so the model acquires more diverse knowledge (a small sampling sketch follows this stage list).
Phase 2: Multi-task training. To improve MiniGPT-v2's performance on each task, this stage focuses exclusively on fine-grained datasets. The researchers excluded the weakly supervised datasets used in Phase 1, such as GRIT-20M and LAION, and updated the data sampling ratios according to the frequency of each task. This strategy lets the model prioritize high-quality aligned image-text data, resulting in superior performance across a variety of tasks.
Phase 3: Multi-modal instruction tuning. Finally, the model is fine-tuned on more multi-modal instruction datasets to enhance its conversational ability as a chatbot.
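The ratio-based sampling described for Phases 1 and 2 amounts to drawing each training example's source dataset from a weighted distribution. The sketch below shows the idea; the dataset names and weights are illustrative placeholders, not the paper's actual values.

```python
import random

# Illustrative Phase-1-style mixture: weakly labeled data sampled heavily.
stage1_weights = {
    "GRIT-20M": 0.30,
    "LAION": 0.30,
    "COCO-caption": 0.15,
    "RefCOCO": 0.15,
    "VQAv2": 0.10,
}

def sample_batch_sources(weights: dict[str, float], batch_size: int) -> list[str]:
    """Draw, per example, which dataset the next training sample comes from."""
    names, probs = zip(*weights.items())
    return random.choices(names, weights=probs, k=batch_size)

print(sample_batch_sources(stage1_weights, batch_size=8))
```

Moving to Phase 2 then just means zeroing out the weakly supervised entries and re-normalizing the remaining weights by task frequency.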
Finally, an official demo is available for readers to test. For example, in the demo you can upload a photo, select [Detection], and enter "red balloon"; the model will then identify the red balloon in the picture:
Interested readers can check the paper homepage for more information.