
The most comprehensive review of multimodal large models is here! 7 Microsoft researchers team up: 5 major themes in a 119-page document

王林 (forwarded) · 2023-09-25 16:49:06

The most complete review of multimodal large models is here!

Written by 7 Chinese researchers at Microsoft, it runs 119 pages.


Starting from two categories of multimodal large model research directions (those that are already mature, and those still at the forefront), it comprehensively summarizes five specific research themes:

  • Visual understanding
  • Visual generation
  • Unified visual model
  • LLM-supported multi-modal large model
  • Multi-modal agent

The most comprehensive review of multimodal large models is here! 7 Microsoft researchers cooperated vigorously, 5 major themes, 119 pages of document

It also focuses on one phenomenon: multimodal foundation models are moving from specialists to general-purpose assistants.

(P.s. this is why the authors drew an image of Doraemon at the beginning of the paper.)

Who should read this review (report)?

In Microsoft's own words:

As long as you are interested in learning the fundamentals and latest progress of multimodal foundation models, whether you are a professional researcher or a student, this content is very suitable for you.

Let’s take a look~

One article to understand the current state of multimodal large models

Of these five specific topics, the first two are currently mature fields, while the last three belong to the cutting edge.

1. Visual understanding

The core issue in this part is how to pre-train a powerful image understanding backbone.

According to the supervision signal used to train the model, the methods can be divided into three categories: label supervision, language supervision (represented by CLIP), and image-only self-supervision.

In the last category, the supervision signal is mined from the images themselves; popular methods include contrastive learning, non-contrastive learning, and masked image modeling.
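To make the language-supervision category concrete, here is a minimal PyTorch sketch of a CLIP-style contrastive objective (an illustration, not the paper's code): image and text embeddings from the two encoders are L2-normalized into a shared space, and matched pairs within a batch are pulled together by a symmetric cross-entropy loss.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched image-text pairs.

    image_emb, text_emb: (batch, dim) outputs of the two encoders.
    Diagonal entries of the similarity matrix are the positive pairs.
    """
    # L2-normalize so dot products become cosine similarities
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix, sharpened by a temperature
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image-to-text and text-to-image
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random tensors standing in for encoder outputs
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(clip_contrastive_loss(img, txt))
```

Drop the text branch and use two augmented views of the same image as the positive pair, and this same skeleton becomes the contrastive image-only self-supervision mentioned above.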

Beyond these methods, the article further discusses pre-training approaches for multimodal fusion, and for region-level and pixel-level image understanding.


Representative works for each of the above methods are also listed in the paper.


2. Visual generation

This topic is at the core of AIGC. It is not limited to image generation, but also covers video, 3D point clouds, and more.

And its usefulness is not limited to fields such as art and design: it is also very helpful for synthesizing training data, directly enabling a closed loop of multimodal content understanding and generation.

In this part, the authors focus on the importance of, and methods for, generating content that is strictly consistent with human intent (with a focus on image generation).

Specifically, this is approached from four aspects: spatially controllable generation, text-based editing, better following of text prompts, and concept customization.
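To make spatially controllable generation tangible, here is a short sketch using the Hugging Face diffusers library, where a Canny edge map pins down the layout of a Stable Diffusion image via ControlNet. This is one common public setup, not the paper's own method; the file names are placeholders.

```python
import torch
import numpy as np
import cv2
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Turn a reference photo into a Canny edge map that will constrain layout
ref = np.array(Image.open("reference.png").convert("RGB"))
edges = cv2.Canny(ref, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Load a ControlNet trained on Canny edges, attached to a base SD pipeline
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The prompt decides content; the edge map fixes the spatial structure
result = pipe(
    "a cozy cabin in the woods at golden hour",
    image=control_image,
    num_inference_steps=30,
)
result.images[0].save("controlled.png")
```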


At the end of this section, the authors also share their views on current research trends and future research directions.

Namely: in order to better follow human intent, and to make the above four directions more flexible or even replaceable, a general-purpose text-to-image generation model needs to be developed.

Representative works for each of the four directions are listed in the paper.


3. Unified vision model

This part discusses the challenges faced in building a unified vision model:

First, the input types are different;

Second, different tasks require different levels of granularity, and their outputs also require different formats.

Beyond modeling, data also poses challenges. For example, annotation costs vary greatly across label types and are much higher than for text data, which results in visual data usually being much smaller in scale than text corpora.

However, despite the many challenges, the authors point out:

The CV field is increasingly interested in developing universal and unified vision systems, and three trends have emerged:


The first is the shift from closed-set to open-set models, which can better match text and visual concepts (a minimal demonstration follows the three trends below).

The second is the shift from specific tasks to general capabilities; the most important reason is that the cost of developing a new model for every new task is too high.

The third is the shift from static models to promptable models: an LLM can take different language and contextual prompts as input and produce the output the user wants, without fine-tuning. The general vision model we want to build should have the same in-context learning capability.
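The closed-set to open-set shift is easy to demonstrate with a public CLIP checkpoint through Hugging Face transformers: the candidate labels are free-form text chosen at inference time rather than a fixed classification head. A minimal sketch (the file name and labels are illustrative):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any label set can be supplied at inference time; there is no fixed class head
labels = ["a photo of a corgi", "a photo of a robot cat", "a drawing of Doraemon"]
image = Image.open("test.png")

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image's similarity to each candidate text
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```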

4. LLM-supported multimodal large models

This section comprehensively discusses multimodal large models.

First, the authors study the background and representative examples in depth, discuss OpenAI's progress in multimodal research, and identify existing research gaps in the field.

Next, the authors examine in detail the importance of instruction fine-tuning in large language models.

Then, the authors discuss instruction fine-tuning in multimodal large models, covering its principles, significance, and applications.
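To make instruction fine-tuning concrete, here is a sketch of what a single visual instruction-tuning sample might look like, loosely following the LLaVA-style conversation format; the field names and contents are illustrative, not taken from the paper.

```python
# One visual instruction-tuning sample (illustrative, LLaVA-style).
# "<image>" marks where the vision encoder's tokens are spliced into
# the language model's input during training.
sample = {
    "id": "000123",
    "image": "coco/train2017/000000123456.jpg",
    "conversations": [
        {"from": "human",
         "value": "<image>\nWhat is unusual about this scene?"},
        {"from": "gpt",
         "value": "A man is ironing clothes on a board attached to "
                  "the roof of a moving taxi."},
    ],
}

# Training minimizes cross-entropy only on the assistant ("gpt") turns,
# conditioned on the image features and the human instructions.
```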

Finally, some advanced topics in multimodal models are covered for a deeper understanding, including: additional modalities beyond vision and language, multimodal in-context learning, parameter-efficient training, and benchmarks.

5. Multimodal agent

A multimodal agent is an approach that connects different multimodal experts with an LLM to solve complex multimodal understanding problems.

In this part, the authors review the evolution of this modeling paradigm and summarize the fundamental differences between this approach and traditional methods.

Taking MM-REACT as an example, the paper introduces in detail how this approach works.
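To give a feel for the pattern, below is a hedged sketch of such a tool loop (not MM-REACT's actual implementation): an LLM decides whether to call a vision expert, reads the tool's output back as text, and composes a final answer. `call_llm`, the expert functions, and the prompt protocol are all hypothetical placeholders.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call. Here it follows a
    fixed script purely so the sketch runs end to end."""
    if "Observation" not in prompt:
        return "CALL image_caption"
    return "FINAL The image shows a dog chasing a frisbee in a park."

# Registry of vision "experts" the LLM can invoke by name
EXPERTS = {
    "image_caption": lambda path: "a dog chasing a frisbee in a park",
    "ocr":           lambda path: "PARK RULES: DOGS MUST BE LEASHED",
}

def multimodal_agent(image_path: str, question: str, max_steps: int = 5) -> str:
    history = (f"User question about {image_path}: {question}\n"
               f"Available tools: {', '.join(EXPERTS)}.\n"
               "Reply 'CALL <tool>' to use a tool, or 'FINAL <answer>'.")
    for _ in range(max_steps):
        reply = call_llm(history)
        if reply.startswith("FINAL"):
            return reply[len("FINAL"):].strip()
        if reply.startswith("CALL"):
            tool = reply.split()[1]
            # The LLM never sees pixels: expert outputs come back as text,
            # which is how the agent grounds its answer in the image
            observation = EXPERTS[tool](image_path)
            history += f"\n{reply}\nObservation from {tool}: {observation}"
    return "Could not answer within the step budget."

print(multimodal_agent("photo.jpg", "What is happening in this picture?"))
```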

It further summarizes a general recipe for building a multimodal agent, along with the emerging abilities such agents show in multimodal understanding. It also covers how this capability can easily be extended, including with the latest and strongest LLMs and potentially millions of tools.

And of course, some advanced topics are also discussed at the end, including how to improve and evaluate multimodal agents, the various applications built with them, and more.


About the authors

This report has 7 authors.

The initiator and overall lead is Chunyuan Li.

He is a principal researcher at Microsoft Redmond and holds a Ph.D. from Duke University. His recent research interests include large-scale pre-training in CV and NLP.

He was responsible for writing the opening introduction, the closing summary, and the chapter on LLM-supported multimodal large models.


There are 4 core authors:

  • Zhe Gan

He has now joined Apple AI/ML, where he is responsible for large-scale vision and multimodal foundation model research. Previously, he was a principal researcher at Microsoft Azure AI. He holds bachelor's and master's degrees from Peking University and a Ph.D. from Duke University.

  • Zhengyuan Yang

He is a senior researcher at Microsoft. He received his Ph.D. from the University of Rochester, winning the ACM SIGMM outstanding doctoral award among other honors, and completed his undergraduate studies at the University of Science and Technology of China.

  • Jianwei Yang

Chief researcher of the deep learning group at Microsoft Research Redmond. PhD from Georgia Institute of Technology.

  • Linjie Li (female)

A researcher in the Microsoft Cloud & AI Computer Vision group; she received her master's degree from Purdue University.

They were each responsible for one of the remaining four thematic chapters.

Survey link: https://arxiv.org/abs/2309.10020

