
MiniGPT-4 has been upgraded to MiniGPT-v2. Multi-modal tasks can still be completed without GPT-4.

PHPz
2023-10-17 14:41:09

A few months ago, several researchers from KAUST (King Abdullah University of Science and Technology, Saudi Arabia) proposed the MiniGPT-4 project, which provides image understanding and dialogue capabilities similar to GPT-4.

For example, MiniGPT-4 can describe the scene in an image: "The picture shows a cactus growing on a frozen lake. There are huge ice crystals around the cactus, and snow-capped peaks in the distance..." If you then ask whether such a scene could occur in the real world, MiniGPT-4 answers that the image is uncommon in the real world and explains why.


Just a few months later, the KAUST team, together with researchers from Meta, announced that they have upgraded MiniGPT-4 to MiniGPT-v2.


Paper: https://arxiv.org/pdf/2310.09478.pdf

Project homepage: https://minigpt-v2.github.io/

Demo: https://minigpt-v2.github.io/

Specifically, MiniGPT-v2 serves as a unified interface for handling a variety of vision-language tasks. The authors also propose using unique task identifier tokens when training the model; these identifiers help the model easily distinguish each task instruction and improve learning efficiency on each task.
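To make the identifier mechanism concrete, here is a minimal sketch of how such task-prefixed prompts could be assembled on the caller's side. The identifier tokens themselves ([grounding], [detection], [identify], [refer], [vqa]) appear in the paper and demo, but the surrounding template (an LLaMA-2 style [INST] ... [/INST] wrapper with an <ImageHere> placeholder for the projected visual tokens) is an illustrative assumption, not the official implementation.

```python
# Minimal sketch of MiniGPT-v2-style task-identifier prompts.
# The identifier tokens come from the paper/demo; the surrounding template
# is an illustrative assumption, not the official code.

TASK_IDENTIFIERS = {
    "grounding": "[grounding]",   # spatially grounded image description
    "detection": "[detection]",   # locate objects named in the text
    "identify":  "[identify]",    # name the object inside a given box
    "refer":     "[refer]",       # find the box for a described object
    "vqa":       "[vqa]",         # visual question answering
}

def build_prompt(task, instruction):
    """Prepend the task identifier (if any) to the user instruction."""
    identifier = TASK_IDENTIFIERS.get(task, "")
    body = f"{identifier} {instruction}".strip()
    # <ImageHere> marks where the projected visual tokens would be spliced in.
    return f"[INST] <Img><ImageHere></Img> {body} [/INST]"

print(build_prompt("detection", "red balloon"))
# [INST] <Img><ImageHere></Img> [detection] red balloon [/INST]
print(build_prompt(None, "What is unusual about this scene?"))
# [INST] <Img><ImageHere></Img> What is unusual about this scene? [/INST]
```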

To evaluate the performance of the MiniGPT-v2 model, the researchers conducted extensive experiments on different visual-language tasks. Results show that MiniGPT-v2 achieves SOTA or comparable performance on various benchmarks compared to previous vision-language general-purpose models such as MiniGPT-4, InstructBLIP, LLaVA, and Shikra. For example, MiniGPT-v2 outperforms MiniGPT-4 by 21.3%, InstructBLIP by 11.3%, and LLaVA by 11.7% on the VSR benchmark.


Below are specific examples illustrating the role of MiniGPT-v2's task identifiers.

For example, by adding the [grounding] identifier, the model can generate an image description with spatial position awareness.


By adding the [detection] identifier, the model directly extracts the objects mentioned in the input text and finds their spatial positions in the image.


If you draw a box around an object in the image and add the [identify] identifier, the model directly names that object.

By adding the [refer] identifier together with a description of an object, the model directly finds the object's corresponding spatial position in the image.


You can also hold a dialogue about an image without adding any task identifier.


The model's spatial awareness has also become stronger: you can directly ask the model who appears on the left, in the middle, and on the right of the picture.


Method introduction

The MiniGPT-v2 model architecture consists of three components: a visual backbone, a linear projection layer, and a large language model.


Visual backbone: MiniGPT-v2 uses EVA as its visual backbone, which is kept frozen during training. The model is trained at an image resolution of 448x448, and the positional encoding is interpolated to scale to higher image resolutions.
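The paper does not include the interpolation code, but resizing a vision transformer's learned positional embeddings to a larger patch grid is commonly done with 2D interpolation. The sketch below illustrates the general idea; the shapes (a [CLS] token plus a square patch grid, embedding width 1408) are assumptions for illustration and may not match EVA's exact layout.

```python
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed, old_grid, new_grid):
    """Resize learned positional embeddings from an old_grid x old_grid patch
    layout to new_grid x new_grid, keeping the leading [CLS] embedding as-is.

    pos_embed: tensor of shape (1, 1 + old_grid**2, dim) -- an illustrative
    layout, not necessarily EVA's exact one.
    """
    cls_pos, patch_pos = pos_embed[:, :1], pos_embed[:, 1:]
    dim = patch_pos.shape[-1]
    # (1, N, dim) -> (1, dim, old_grid, old_grid) so we can interpolate spatially
    patch_pos = patch_pos.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=(new_grid, new_grid),
                              mode="bicubic", align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)
    return torch.cat([cls_pos, patch_pos], dim=1)

# e.g. a backbone pretrained on a 16x16 patch grid (224x224, patch size 14)
# adapted to a 32x32 grid (448x448); 1408 is an assumed embedding width.
pos = torch.randn(1, 1 + 16 * 16, 1408)
print(interpolate_pos_embed(pos, old_grid=16, new_grid=32).shape)  # (1, 1025, 1408)
```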

Linear projection layer: the goal is to project all visual tokens from the frozen visual backbone into the language model's space. However, for higher-resolution images (e.g., 448x448), projecting every image token produces a very long input sequence (e.g., 1,024 tokens) and significantly reduces training and inference efficiency. Therefore, the paper simply concatenates 4 adjacent visual tokens in the embedding space and projects them together into a single embedding in the language model's feature space, reducing the number of visual input tokens by a factor of 4.
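A minimal sketch of this 4-to-1 token merging follows; the dimensions (a 1408-wide visual feature and LLaMA-2 7B's 4096 hidden size) and module names are illustrative, not taken from the released code.

```python
import torch
import torch.nn as nn

class VisualTokenMerger(nn.Module):
    """Concatenate every group of 4 adjacent visual tokens and project the group
    into one language-model embedding, shrinking the visual sequence 4x."""

    def __init__(self, vis_dim=1408, llm_dim=4096, group=4):
        super().__init__()
        self.group = group
        self.proj = nn.Linear(vis_dim * group, llm_dim)

    def forward(self, vis_tokens):
        # vis_tokens: (batch, num_tokens, vis_dim); num_tokens divisible by group
        b, n, d = vis_tokens.shape
        grouped = vis_tokens.reshape(b, n // self.group, self.group * d)
        return self.proj(grouped)

# e.g. 1,024 tokens from a 448x448 image become 256 tokens in the LLM space
tokens = torch.randn(2, 1024, 1408)
print(VisualTokenMerger()(tokens).shape)  # torch.Size([2, 256, 4096])
```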

Large language model: MiniGPT-v2 uses the open-source LLaMA2-chat (7B) as its language model backbone. In this work, the language model is treated as a unified interface for various vision-language inputs, and LLaMA-2 language tokens are used directly to perform the various vision-language tasks. For grounded vision tasks that require generating spatial locations, the language model is simply asked to generate textual representations of bounding boxes to express those locations.
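As a concrete illustration of such "textual bounding boxes", the sketch below renders a pixel-space box as an integer-coordinate string on a 0-100 grid, the normalization scheme described in the paper; the exact bracket and delimiter style here is an assumption to verify against the official code.

```python
def box_to_text(box_xyxy, width, height, bins=100):
    """Render a pixel-space box (x1, y1, x2, y2) as a plain-text token string
    with integer coordinates on a 0-100 grid; the bracket/delimiter style
    used here is an assumption for illustration."""
    x1, y1, x2, y2 = box_xyxy
    nx1, nx2 = round(x1 / width * bins), round(x2 / width * bins)
    ny1, ny2 = round(y1 / height * bins), round(y2 / height * bins)
    return f"{{<{nx1}><{ny1}><{nx2}><{ny2}>}}"

# e.g. a box in the upper-left region of a 448x448 image
print(box_to_text((30, 20, 210, 200), width=448, height=448))  # {<7><4><47><45>}
```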

Multi-task instruction training

The model is trained with task-identifier instructions in three stages. The datasets used in each training stage are listed in Table 2 of the paper.


Stage 1: Pre-training. Weakly labeled datasets are given a high sampling rate so the model can acquire more diverse knowledge.

Stage 2: Multi-task training. To improve MiniGPT-v2's performance on each task, this stage focuses only on fine-grained datasets. The researchers excluded the weakly supervised datasets used in Stage 1, such as GRIT-20M and LAION, and updated the data sampling ratios according to the frequency of each task. This strategy lets the model prioritize high-quality, well-aligned image-text data, yielding superior performance across a variety of tasks.

Stage 3: Multi-modal instruction tuning. Finally, the model is fine-tuned on more multimodal instruction datasets to enhance its conversational ability as a chatbot.
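Put simply, the three stages differ mainly in which datasets are sampled and how often. The toy sketch below shows ratio-based sampling per stage; the dataset groups and weights are made up for illustration, and the real ones are listed in Table 2 of the paper.

```python
import random

# Illustrative sampling ratios per training stage -- the real datasets and
# weights are listed in Table 2 of the paper; these numbers are made up.
STAGE_RATIOS = {
    "stage1_pretrain":    {"weakly_labeled": 0.7, "fine_grained": 0.2, "instruction": 0.1},
    "stage2_multitask":   {"fine_grained": 0.9, "instruction": 0.1},  # weak data dropped
    "stage3_instruction": {"fine_grained": 0.3, "instruction": 0.7},
}

def sample_dataset(stage, rng=random.Random(0)):
    """Pick which dataset group the next training batch is drawn from."""
    names, weights = zip(*STAGE_RATIOS[stage].items())
    return rng.choices(names, weights=weights, k=1)[0]

print([sample_dataset("stage2_multitask") for _ in range(5)])
```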

Finally, an official demo is available for readers to try. For example, upload a photo, select [Detection], and enter "red balloon"; the model then locates the red balloon in the image.


Interested readers can check the paper homepage for more information.


Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn to request deletion.