
Breaking down the integration innovation of NLP and CV: taking stock of multi-modal deep learning in recent years

WBOY
2023-04-11

In recent years, the fields of natural language processing (NLP) and computer vision (CV) have seen continuous methodological breakthroughs. Not only have single-modality models made progress, but large-scale multimodal methods have also become a very popular research area.


  • Paper address: https://arxiv.org/pdf/2301.04856v1.pdf
  • Project address: https://github.com/slds-lmu/seminar_multimodal_dl

In a recent paper, researcher Matthias Aßenmacher reviews the state of the art in these two subfields of deep learning and attempts to give a comprehensive overview. The paper discusses modeling frameworks that convert one modality into another (Sections 3.1 and 3.2), as well as representation-learning models that exploit one modality to enhance the other (Sections 3.3 and 3.4), and concludes this part by introducing architectures that process both modalities simultaneously (Section 3.5). Finally, the paper covers further modalities (Sections 4.1 and 4.2) as well as general-purpose multimodal models (Section 4.3) that can handle different tasks across different modalities within a unified architecture. An interesting application ("Generative Art", Section 4.4) is the icing on the cake of this review.

The paper's table of contents is as follows:

[Figure: overview of the paper's chapters]

Introduction to Multimodal Deep Learning

Humans have five basic senses: hearing, touch, smell, taste, and sight. Through these five modalities, we perceive and understand the world around us. "Multimodality" means using a combination of several information channels at the same time to understand our surroundings. For example, when toddlers learn the word "cat," they say the word out loud in different ways, point at cats, and imitate sounds like "meow." AI researchers take the human learning process as a paradigm and combine different modalities to train deep learning models.

At its core, a deep learning algorithm trains a neural network to optimize a defined objective (loss) function. Optimization, i.e. minimizing the loss, is performed by a numerical procedure called gradient descent. Consequently, deep learning models can only process numerical inputs and can only produce numerical outputs. In multimodal tasks, however, we often encounter unstructured data such as images or text. So the first question in a multimodal task is how to represent the input numerically; the second is how to combine the different modalities appropriately.
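To make the point about numerical optimization concrete, here is a minimal sketch (not from the paper) that fits a one-parameter model with gradient descent; the data and learning rate are invented for illustration:

```python
import numpy as np

# Fit y = w*x by minimizing mean squared error with gradient descent,
# illustrating that training reduces to numerically minimizing a loss.
def gradient_descent(x, y, lr=0.1, steps=100):
    w = 0.0  # single scalar parameter, initialized at zero
    for _ in range(steps):
        y_hat = w * x                        # model output (numerical)
        grad = 2 * np.mean((y_hat - y) * x)  # d/dw of the mean squared error
        w -= lr * grad                       # step against the gradient
    return w

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x                 # ground truth generated with w = 2
w = gradient_descent(x, y)
print(round(w, 3))          # converges toward 2.0
```

Everything entering and leaving the model is a number; images and text must first be converted into such numerical form before any of this machinery applies.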

For example, training a deep learning model to generate a picture of a cat might be a typical task. First, the computer needs to understand the text input "cat" and then somehow convert that information into a specific image. Therefore, it is necessary to determine the contextual relationship between words in the input text and the spatial relationship between pixels in the output image. What might be easy for a young child can be a huge challenge for a computer. Both must have a certain understanding of the word "cat", including the connotation and appearance of the animal.

A common approach in the current field of deep learning is to generate embeddings that numerically represent cats as vectors in some latent space. To achieve this, various methods and algorithm architectures have been developed in recent years. This article provides an overview of various methods used in state-of-the-art (SOTA) multimodal deep learning to overcome the challenges posed by unstructured data and combinations of different modal inputs.
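The idea of a latent space can be sketched with made-up vectors (these are not learned embeddings, just illustrative values): similar concepts should end up close together, as measured by cosine similarity.

```python
import numpy as np

# Toy "embeddings": words as points in a hypothetical 3-D latent space.
embeddings = {
    "cat": np.array([0.9, 0.8, 0.1]),
    "dog": np.array([0.8, 0.9, 0.2]),
    "car": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In this invented space, "cat" lies closer to "dog" than to "car"
print(cosine(embeddings["cat"], embeddings["dog"]) >
      cosine(embeddings["cat"], embeddings["car"]))  # True
```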

Chapter Introduction

Because multimodal models usually use text and images as input or output, Chapter 2 focuses on methods from natural language processing (NLP) and computer vision (CV). NLP methods mainly deal with processing text data, while CV mostly deals with image processing.

A very important NLP concept (Section 2.1) is the word embedding, which is now an essential part of almost all multimodal deep learning architectures. This concept also laid the foundation for Transformer-based models such as BERT, which has achieved significant progress on several NLP tasks. In particular, the Transformer's self-attention mechanism has completely changed how NLP models are built, which is why most of them use the Transformer as their core.
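The self-attention mechanism mentioned above can be sketched in a few lines; all dimensions and weights below are arbitrary toy values, not from any real model:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention, the core Transformer operation
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise token affinities
    weights = softmax(scores)                # each row sums to 1
    return weights @ V                       # context-mixed representations

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one context-aware vector per token
```

Each output vector is a weighted mixture of all token representations, which is what lets the model capture contextual relationships between words.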

In computer vision (Section 2.2), the author introduces different network architectures, namely ResNet, EfficientNet, SimCLR and BYOL. In both areas, it is of great interest to compare different approaches and how they perform on challenging benchmarks. Therefore, subsection 2.3 at the end of Chapter 2 provides a comprehensive overview of different datasets, pre-training tasks and benchmarks for CV and NLP.

Chapter 3 focuses on different multimodal architectures, covering various combinations of text and images; the models presented combine and advance research from different areas of NLP and CV. We first introduce the Img2Text task (Section 3.1), the Microsoft COCO dataset for object recognition, and the Meshed-Memory Transformer for image captioning.

In addition, researchers have developed methods that generate images from short text prompts (Section 3.2). The first models to tackle this task were generative adversarial networks (GANs) and variational autoencoders (VAEs). These methods have been improved continuously in recent years, and today SOTA Transformer architectures and text-guided diffusion models such as DALL-E and GLIDE achieve remarkable results. Another interesting question is how images can be leveraged to support language models (Section 3.3). This can be achieved via sequential embeddings, more advanced grounded embeddings, or directly inside the Transformer.
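The diffusion models named above learn to reverse a gradual noising process. A hedged sketch of the forward (noising) step, with illustrative values only:

```python
import numpy as np

# Forward diffusion step: x_t = sqrt(a_bar)*x_0 + sqrt(1 - a_bar)*eps.
# The generative model is trained to invert this corruption; the "image"
# and noise-schedule values here are invented for illustration.
def forward_diffuse(x0, alpha_bar, rng):
    eps = rng.normal(size=x0.shape)  # Gaussian noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

rng = np.random.default_rng(0)
x0 = rng.normal(size=(16, 16))                           # stand-in for an image
x_early = forward_diffuse(x0, alpha_bar=0.99, rng=rng)   # barely corrupted
x_late = forward_diffuse(x0, alpha_bar=0.01, rng=rng)    # almost pure noise
# An early step stays close to the clean image; a late step does not
print(np.abs(x_early - x0).mean() < np.abs(x_late - x0).mean())  # True
```

Text guidance (as in GLIDE) conditions the learned reverse process on a prompt embedding, which this sketch omits.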

Section 3.4 then looks at text-supported CV models such as CLIP, ALIGN, and Florence. Their use as foundation models implies model reuse (e.g., CLIP inside DALL-E 2), and a contrastive loss connects text to images. In addition, zero-shot transfer makes it possible to classify new, unseen data almost without fine-tuning. CLIP in particular, an open-source architecture for image classification that is also reused in generation pipelines, attracted much attention last year. Some further architectures that process text and images simultaneously are introduced at the end of Chapter 3 (Section 3.5).
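CLIP-style zero-shot classification can be sketched as follows. Real CLIP uses learned image and text encoders trained with a contrastive loss; the embedding vectors below are made up purely for illustration:

```python
import numpy as np

# Zero-shot classification: embed the image and every candidate caption in
# a shared space, then pick the caption most similar to the image.
def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

label_emb = normalize(np.array([
    [0.9, 0.1, 0.0],   # "a photo of a cat"
    [0.1, 0.9, 0.0],   # "a photo of a dog"
    [0.0, 0.1, 0.9],   # "a photo of a car"
]))
image_emb = normalize(np.array([0.8, 0.2, 0.1]))  # pretend encoder output

similarities = label_emb @ image_emb  # cosine similarities (unit vectors)
print(int(np.argmax(similarities)))   # 0: the "cat" caption wins
```

Because the labels are just text, new classes can be added at inference time without any retraining, which is what makes the zero-shot setting so attractive.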

For example, Data2Vec applies the same learning method to speech, vision, and language, trying to find a single way to handle different modalities within one architecture. Furthermore, ViLBERT extends the popular BERT architecture to handle both image and text inputs through co-attention. This approach is also used in DeepMind's Flamingo, which additionally aims to tackle multiple tasks with a single visual language model via few-shot learning and by freezing pre-trained vision and language models.
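The co-attention idea behind ViLBERT-style models can be sketched by swapping where queries and keys come from: text queries attend over image features. All shapes and weights below are arbitrary toy values:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(text_tokens, image_patches, Wq, Wk, Wv):
    Q = text_tokens @ Wq       # queries come from the text stream
    K = image_patches @ Wk     # keys/values come from the vision stream
    V = image_patches @ Wv
    weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return weights @ V         # text tokens enriched with visual context

rng = np.random.default_rng(1)
text = rng.normal(size=(5, 16))    # 5 text tokens
image = rng.normal(size=(9, 16))   # 9 image patches
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
out = cross_attention(text, image, Wq, Wk, Wv)
print(out.shape)  # (5, 16): one visually-informed vector per text token
```

In a full co-attentional block each modality attends over the other in both directions; this sketch shows only the text-attends-to-image half.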

The final chapter (Chapter 4) introduces methods that can handle modalities other than text and images, such as video, speech, or tabular data. The overall goal is to find modality-agnostic, general-purpose multimodal architectures rather than adding modalities for their own sake. This requires dealing with the problems of multimodal fusion and alignment, and deciding whether to use joint or coordinated representations (Section 4.1). The precise combination of structured and unstructured data is then described in more detail (Section 4.2).
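The fusion question above can be illustrated with a toy contrast between two common strategies; the feature names and values here are invented, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
tabular = rng.normal(size=4)     # e.g. a structured record (4 features)
image_feat = rng.normal(size=8)  # e.g. features from an image encoder

# Early (joint) fusion: concatenate features into one representation
# that a single downstream model consumes.
joint = np.concatenate([tabular, image_feat])
print(joint.shape)               # (12,)

# Late fusion: separate per-modality models each produce a score,
# and the scores are merged afterwards (here by simple averaging).
score_tab = 1 / (1 + np.exp(-tabular.sum()))    # stand-in model output
score_img = 1 / (1 + np.exp(-image_feat.sum())) # stand-in model output
fused_score = 0.5 * (score_tab + score_img)
print(0.0 <= fused_score <= 1.0)                # True
```

Early fusion lets the model learn cross-modal interactions, while late fusion keeps the modalities independent until the decision stage; which is preferable depends on the task and on how aligned the modalities are.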

The author also presents different fusion strategies developed in recent years, illustrated with two use cases from survival analysis and economics. Another interesting research question is how to handle different tasks in a so-called multi-purpose model (Section 4.3), such as the "Pathways" model created by Google researchers. Finally, the article shows a typical application of multimodal deep learning in the art scene: generative art, where image-generation models such as DALL-E are used to create artworks (Section 4.4).

For more information, please refer to the original paper.


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.