


Breaking down the integration innovation of NLP and CV: taking stock of multi-modal deep learning in recent years
In recent years, the fields of NLP and CV have seen continuous methodological breakthroughs. Beyond progress in single-modal models, large-scale multimodal approaches have also become a very popular research area.
- Paper address: https://arxiv.org/pdf/2301.04856v1.pdf
- Project address: https://github.com/slds-lmu/seminar_multimodal_dl
In a recent paper, researcher Matthias Aßenmacher reviews state-of-the-art research methods in these two subfields of deep learning and attempts to give a comprehensive overview. The paper discusses modeling frameworks for converting one modality into another (Sections 3.1 and 3.2), as well as representation learning models that exploit one modality to enhance another (Sections 3.3 and 3.4). The second part concludes by introducing architectures that process both modalities simultaneously (Section 3.5). Finally, the paper also covers other modalities (Sections 4.1 and 4.2) as well as general multimodal models (Section 4.3) that are able to handle different tasks on different modalities within a unified architecture. An interesting application ("Generative Art", Section 4.4) is the icing on the cake of this review.
The chapter structure of the paper is summarized below.
Humans have five basic senses: hearing, touch, smell, taste and vision. Through these five modes, we perceive and understand the world around us. "Multimodality" means using a combination of multiple information channels at the same time to understand the surrounding environment. For example, when toddlers learn the word "cat," they say the word out loud in different ways, pointing to the cat and making sounds like "meow." AI researchers use the human learning process as a paradigm and combine different modalities to train deep learning models.
At their core, deep learning algorithms train a neural network to optimize a defined objective, i.e., to minimize a loss function. This minimization is accomplished through a numerical optimization procedure called gradient descent. Consequently, deep learning models can only process numerical inputs and can only produce numerical outputs. However, multimodal tasks often involve unstructured data such as images or text. The first question for multimodal tasks is therefore how to represent the input numerically; the second is how to combine the different modalities appropriately.
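The gradient-descent idea described above can be sketched in a few lines. This is a minimal toy example (not from the paper): we minimize the one-parameter loss L(w) = (w - 3)^2, whose gradient is 2(w - 3), by repeatedly stepping opposite the gradient.

```python
# Minimal gradient-descent sketch on a toy loss L(w) = (w - 3)^2.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    # Analytic derivative dL/dw = 2 * (w - 3)
    return 2.0 * (w - 3.0)

w = 0.0    # initial parameter value
lr = 0.1   # learning rate (step size)
for _ in range(100):
    w -= lr * grad(w)  # step opposite the gradient to reduce the loss

# w converges toward the minimizer w = 3
```

Real networks have millions of parameters and the gradient is computed via backpropagation, but the update rule is exactly this one, applied per parameter.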
For example, training a deep learning model to generate a picture of a cat might be a typical task. First, the computer needs to understand the text input "cat" and then somehow convert that information into a specific image. Therefore, it is necessary to determine the contextual relationship between words in the input text and the spatial relationship between pixels in the output image. What might be easy for a young child can be a huge challenge for a computer. Both must have a certain understanding of the word "cat", including the connotation and appearance of the animal.
A common approach in the current field of deep learning is to generate embeddings that numerically represent cats as vectors in some latent space. To achieve this, various methods and algorithm architectures have been developed in recent years. This article provides an overview of various methods used in state-of-the-art (SOTA) multimodal deep learning to overcome the challenges posed by unstructured data and combinations of different modal inputs.
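The embedding idea can be illustrated with a toy sketch. The 4-dimensional vectors below are made up for illustration (real models learn embeddings with hundreds of dimensions from data); the point is that semantically related concepts end up close together in the latent space, as measured by cosine similarity.

```python
import numpy as np

# Made-up 4-d embeddings: semantically related words get nearby vectors.
embeddings = {
    "cat":   np.array([0.9, 0.8, 0.1, 0.0]),
    "tiger": np.array([0.8, 0.9, 0.2, 0.1]),
    "car":   np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine(u, v):
    # Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "cat" should be closer to "tiger" than to "car" in this latent space.
sim_cat_tiger = cosine(embeddings["cat"], embeddings["tiger"])
sim_cat_car = cosine(embeddings["cat"], embeddings["car"])
```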
Chapter introduction

Because multimodal models usually use text and images as input or output, Chapter 2 focuses on methods from natural language processing (NLP) and computer vision (CV). Methods in NLP mainly focus on processing text data, while CV mostly deals with image processing.
A very important NLP concept (Section 2.1) is the word embedding, which is now an essential part of almost all multimodal deep learning architectures. This concept also laid the foundation for Transformer-based models such as BERT, which have achieved significant progress in several NLP tasks. In particular, the Transformer's self-attention mechanism has fundamentally changed NLP modeling, which is why most NLP models use the Transformer as their core.
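The self-attention mechanism mentioned above can be sketched in numpy. This is a minimal illustration with random (untrained) projection matrices, not a real model: each token builds a query, key, and value; pairwise query-key scores are turned into softmax weights; and each token's output is a weighted mix of all value vectors.

```python
import numpy as np

# Minimal scaled dot-product self-attention sketch (random weights, not a
# trained model): every token attends to every other token in the sequence.
rng = np.random.default_rng(0)
seq_len, d_model = 4, 8               # 4 tokens, 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))

# Learned projections in a real Transformer; random here for illustration.
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v
scores = Q @ K.T / np.sqrt(d_model)   # pairwise token affinities

# Softmax over each row turns scores into attention weights summing to 1.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

output = weights @ V                  # context-mixed token representations
```

Real Transformers run many such attention heads in parallel and stack them with feed-forward layers, but this single head is the core operation.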
In computer vision (Section 2.2), the author introduces different network architectures, namely ResNet, EfficientNet, SimCLR and BYOL. In both areas, it is of great interest to compare different approaches and how they perform on challenging benchmarks. Therefore, subsection 2.3 at the end of Chapter 2 provides a comprehensive overview of different datasets, pre-training tasks and benchmarks for CV and NLP.
Chapter 3 focuses on different multimodal architectures, covering various combinations of text and images. The models presented combine and advance research from different methods of NLP and CV. We first introduce the Img2Text task (Section 3.1), the Microsoft COCO dataset for object recognition, and the Meshed-Memory Transformer for image captioning.
In addition, researchers have developed methods to generate images based on short text prompts (Section 3.2). The first models to accomplish this task were Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). In recent years, these methods have been continuously improved, and today's SOTA Transformer architectures and text-guided diffusion models such as DALL-E and GLIDE have achieved remarkable results. Another interesting question is how images can be leveraged to support language models (Section 3.3). This can be achieved via sequential embeddings, more advanced grounded embeddings, or directly inside the Transformer.
The paper also looks at text-supported CV models such as CLIP, ALIGN, and Florence (Section 3.4). These foundation models enable model reuse (e.g., CLIP inside DALL-E 2) and rely on a contrastive loss to connect text and images. Moreover, zero-shot prediction makes it possible to classify new and unseen data without fine-tuning. In particular CLIP, an open-source architecture connecting images and text, attracted much attention last year for both image classification and its role in generation pipelines. Some other architectures for processing text and images simultaneously are introduced at the end of Chapter 3 (Section 3.5).
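The zero-shot classification idea behind CLIP can be sketched with made-up embeddings. In the real model, trained image and text encoders map inputs into a shared space; here the vectors are invented for illustration. An image is assigned to whichever class prompt's text embedding it is closest to, with no task-specific fine-tuning.

```python
import numpy as np

# CLIP-style zero-shot classification sketch with made-up embeddings.
def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical text embeddings for prompts like "a photo of a cat"/"... dog".
text_emb = normalize(np.array([[1.0, 0.2, 0.0],    # "cat"
                               [0.1, 1.0, 0.3]]))  # "dog"
class_names = ["cat", "dog"]

# Hypothetical embedding of the input image (here: clearly cat-like).
image_emb = normalize(np.array([0.9, 0.3, 0.1]))

# Cosine similarities (dot products of unit vectors) act as class logits.
logits = image_emb @ text_emb.T
prediction = class_names[int(np.argmax(logits))]
```

Because the classes are defined purely by text prompts, new classes can be added at inference time simply by writing new prompts.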
For example, Data2vec uses the same learning method for speech, vision, and language, seeking a common way to handle different modalities in one architecture. Furthermore, ViLBERT extends the popular BERT architecture to handle image and text inputs by implementing co-attention. This approach is also used in DeepMind's Flamingo, which aims to handle multiple tasks with a single visual language model through few-shot learning and freezing of pre-trained vision and language models.
The final chapter (Chapter 4) introduces methods that can handle modalities other than text and images, such as video, speech, or tabular data. The overall goal is universal multimodal architectures that are modality-agnostic, handling whatever modality a task presents with ease. This raises the problems of multimodal fusion and alignment, and the choice between joint and coordinated representations (Section 4.1). Furthermore, the precise combination of structured and unstructured data is described in more detail (Section 4.2).
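The distinction between joint and coordinated representations can be sketched as follows. All vectors and projection matrices below are made up (random, untrained) for illustration: a joint representation fuses both modalities into one vector, while a coordinated representation keeps separate vectors that a training objective (e.g., a contrastive loss) would align in a common space.

```python
import numpy as np

# Made-up features, e.g. from a text encoder and an image encoder.
text_feat = np.array([0.2, 0.7, 0.1])
image_feat = np.array([0.9, 0.4])

rng = np.random.default_rng(0)

# Joint representation: early fusion by concatenation, then a shared
# (here random) projection produces ONE fused vector for both modalities.
W_shared = rng.normal(size=(5, 4))
joint = np.concatenate([text_feat, image_feat]) @ W_shared

# Coordinated representation: separate projections into a common 4-d space;
# in training, a similarity constraint would pull matching pairs together.
W_text = rng.normal(size=(3, 4))
W_image = rng.normal(size=(2, 4))
text_proj, image_proj = text_feat @ W_text, image_feat @ W_image
```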
The author also surveys different integration strategies developed in recent years, illustrated here through two use cases in survival analysis and economics. Beyond this, another interesting research question is how to handle different tasks in a so-called multi-purpose model (Section 4.3), like the one created by Google researchers in their "Pathways" model. Finally, the article shows a typical application of multimodal deep learning in the art scene, using image generation models such as DALL-E to create works in the field of generative art (Section 4.4).
For more information, please refer to the original paper.
The above is the detailed content of Breaking down the integration innovation of NLP and CV: taking stock of multi-modal deep learning in recent years. For more information, please follow other related articles on the PHP Chinese website!