


ByteDance Doubao and Wuhan University propose CAL: enhancing multi-modal alignment through visually correlated tokens

AIxiv is the column through which this site publishes academic and technical content. Over the past few years, it has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to submit a contribution or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com
- Paper link: https://arxiv.org/pdf/2405.17871
- Code link: https://github.com/foundation-multimodal-models/CAL
CAL's main highlights:
- It plugs directly into the existing training process, with no additional pre-training stage.
- It achieves significant improvements on OCR and captioning benchmarks, and the visualizations show that CAL aligns the image modality better.
- It makes the training process more robust to noisy data.
CAL divides the text tokens in image-text training data into three categories:
- Text highly related to the image: entities (such as people, animals, objects), quantities, colors, in-image text, and so on. These tokens correspond directly to image information and are crucial for multi-modal alignment.
- Text weakly related to the image: connective words, or content that can be inferred from the preceding text. These tokens mainly serve to train the VLM's plain-text capability.
- Text that contradicts the image: these tokens are inconsistent with the image information and may even be misleading, negatively affecting the multi-modal alignment process.
Figure 1: Green marks tokens highly related to the image, red marks tokens that contradict the image content, and uncolored tokens are neutral.
- Prefixing the text with the image input is equivalent to providing additional context: each text token's logit is then conditioned on the image. The change in a token's logit between the two cases reflects how much the image, as a new condition, affects that token.
- Concretely, during training CAL feeds the image-text sequence and the text-only sequence into the large language model (LLM) separately to obtain each text token's logits under both conditions. The logit difference between the two cases measures the image's impact on the token: the larger the difference, the more the image affects the token, and hence the more relevant the token is to the image. This difference is then used to re-weight each token's training loss (a code sketch follows below).

Figure 2: Left, visualization of the token logit diff between the two cases; right, visualization of the CAL method pipeline.
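To make the mechanism concrete, here is a minimal PyTorch sketch of this contrastive re-weighting. It is an illustration under stated assumptions, not the authors' released implementation: `visual_token_weights` and `cal_loss` are hypothetical names, both logits tensors are assumed to be already sliced so they align on the same text-token positions, and the clamping and per-sequence normalization are illustrative choices (see the paper and repo for the exact scheme).

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def visual_token_weights(logits_img, logits_txt, labels, ignore_index=-100):
    """Per-token weights from the contrastive logit difference.

    logits_img: (B, L, V) logits from the image+text forward pass
    logits_txt: (B, L, V) logits from the text-only forward pass
    labels:     (B, L)    next-token targets, ignore_index where masked
    The clamping and normalization below are illustrative assumptions.
    """
    mask = (labels != ignore_index).float()
    tgt = labels.clamp_min(0).unsqueeze(-1)  # safe index for ignored slots
    # Log-probability assigned to the ground-truth token, with vs. without image.
    lp_img = torch.log_softmax(logits_img, -1).gather(-1, tgt).squeeze(-1)
    lp_txt = torch.log_softmax(logits_txt, -1).gather(-1, tgt).squeeze(-1)
    diff = (lp_img - lp_txt) * mask  # >0: image helps; <0: image contradicts
    # Down-weight image-contradicting tokens to zero, keep image-related ones.
    w = diff.clamp_min(0.0)
    # Renormalize so each sequence keeps an average weight of ~1 (assumption).
    denom = (w.sum(-1) / mask.sum(-1).clamp_min(1.0)).clamp_min(1e-6)
    return w / denom.unsqueeze(-1)


def cal_loss(logits_img, logits_txt, labels, ignore_index=-100):
    """Token-level cross-entropy on the image-conditioned pass,
    re-weighted by visual correlation."""
    w = visual_token_weights(logits_img, logits_txt, labels, ignore_index)
    ce = F.cross_entropy(logits_img.transpose(1, 2), labels,
                         ignore_index=ignore_index, reduction="none")
    mask = (labels != ignore_index).float()
    return (w * ce).sum() / mask.sum().clamp_min(1.0)
```

Because the weights are computed under `torch.no_grad()`, the text-only pass serves purely as a reference and only the image-conditioned pass receives gradients; the method therefore costs roughly one extra forward pass per step rather than a separate pre-training stage, consistent with the highlights above.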


