Apple uses autoregressive language models to pre-train image models

1. Background

Since the emergence of large models such as GPT, the autoregressive Transformer modeling approach of language models, pre-trained on the next-token prediction task, has achieved great success. So, can this autoregressive modeling approach also achieve good results in vision models? The article introduced today is Apple's recent work on training a vision model with Transformer-based autoregressive pre-training. Let me introduce this work to you.


Paper title: Scalable Pre-training of Large Autoregressive Image Models

Download address: https://arxiv.org/pdf/2401.08541v1.pdf

Open source code: https://github.com/apple/ml-aim

2. Model structure

The model structure is based on the Transformer, and it adopts the next-token prediction objective from language modeling as the optimization goal. The main modifications are in three aspects. First, unlike ViT, this work uses GPT-style one-way attention, meaning the element at each position only computes attention with the elements before it. Second, prefix tokens with bidirectional attention are introduced to align pre-training with downstream usage (described below). Finally, the output MLP head is redesigned so that the pre-trained head remains usable in downstream tasks. With these improvements, the model achieves significant performance gains on downstream vision tasks.
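To make the first change concrete, here is a minimal sketch of causal (one-way) self-attention over a patch sequence in PyTorch. The shapes and the use of the raw embeddings as queries, keys, and values are illustrative simplifications, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def causal_self_attention(x):
    """GPT-style one-way self-attention over a patch sequence.

    x: (batch, num_patches, dim) patch embeddings.
    Each position may only attend to itself and earlier positions.
    """
    B, N, D = x.shape
    # For brevity, use x directly as queries/keys/values
    # (a real block would apply learned projections).
    scores = x @ x.transpose(-2, -1) / D ** 0.5            # (B, N, N)
    causal_mask = torch.triu(torch.ones(N, N), diagonal=1).bool()
    scores = scores.masked_fill(causal_mask, float("-inf"))
    attn = F.softmax(scores, dim=-1)
    return attn @ x

x = torch.randn(2, 16, 64)   # 2 images, 16 patches, 64-dim embeddings
out = causal_self_attention(x)
print(out.shape)             # torch.Size([2, 16, 64])
```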


In the Transformer, a new mechanism is introduced: multiple prefix tokens are added in front of the input sequence, and these tokens use a bidirectional attention mechanism. The main purpose of this change is to enhance consistency between pre-training and downstream applications, where ViT-style bidirectional attention is widely used. By introducing prefix bidirectional attention during pre-training, the model can better adapt to the needs of various downstream tasks, improving its performance and generalization ability.
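As a sketch of how such a prefix mask might be constructed (the prefix length is a hyperparameter; this is an illustration under my reading of the mechanism, not the paper's code):

```python
import torch

def prefix_attention_mask(seq_len: int, prefix_len: int) -> torch.Tensor:
    """Boolean mask where True marks positions that MAY be attended to.

    The first `prefix_len` tokens attend bidirectionally among themselves;
    all remaining tokens follow the usual causal rule (and can therefore
    already see the whole prefix, which precedes them).
    """
    mask = torch.tril(torch.ones(seq_len, seq_len)).bool()  # standard causal
    mask[:prefix_len, :prefix_len] = True  # bidirectional inside the prefix
    return mask

print(prefix_attention_mask(6, 2).int())
```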


In terms of the model's final output MLP layer, the usual pre-training practice is to discard the MLP head and train a brand-new MLP for downstream tasks. This prevents the pre-trained MLP from being too biased toward the pre-training task, which would hurt downstream performance. In this paper, the authors propose a different approach: they apply an independent MLP to each patch, and fuse the per-patch representations with attention instead of the traditional pooling operation. This improves the usability of the pre-trained MLP head in downstream tasks, better retains information about the whole image, and avoids over-reliance on the pre-training task, which helps the model's generalization ability and adaptability.
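The following sketch illustrates the idea under assumed dimensions: an MLP applied independently to each patch (shared weights, no mixing across patches), followed by a learned query that attends over all patch features in place of average pooling. The class name, head count, and sizes are hypothetical, not the paper's exact head:

```python
import torch
import torch.nn as nn

class AttentionPoolingHead(nn.Module):
    """Per-patch MLP followed by attention-based fusion of patch features."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        # Applied independently to each patch position.
        self.patch_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )
        # A learned query attends over all patches, replacing average pooling.
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, dim)
        h = self.patch_mlp(patches)
        q = self.query.expand(patches.size(0), -1, -1)   # (batch, 1, dim)
        pooled, _ = self.attn(q, h, h)                   # weighted fusion
        return pooled.squeeze(1)                         # (batch, dim)

head = AttentionPoolingHead(dim=64, hidden=128)
feats = torch.randn(2, 16, 64)
print(head(feats).shape)    # torch.Size([2, 64])
```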

Regarding the optimization objective, the article tries two methods. The first directly regresses the patch pixels, using an MSE loss on the predictions. The second tokenizes the image patches in advance, converting prediction into a classification task trained with cross-entropy loss. However, the ablation experiments later in the article show that although the second method also trains normally, its results are not as good as the pixel-level MSE objective.
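A minimal sketch of the pixel-level objective, assuming normalized pixel targets and a one-step shift so that each position predicts the next patch (shapes are hypothetical):

```python
import torch
import torch.nn.functional as F

def pixel_regression_loss(pred: torch.Tensor, patches: torch.Tensor) -> torch.Tensor:
    """Autoregressive pixel objective: each position predicts the NEXT patch.

    pred:    (batch, num_patches, patch_dim) model outputs
    patches: (batch, num_patches, patch_dim) ground-truth normalized pixels
    """
    # Shift by one: position i's output is compared with patch i+1.
    return F.mse_loss(pred[:, :-1], patches[:, 1:])

pred = torch.randn(2, 16, 768)      # e.g. 16x16x3 = 768 pixel values per patch
target = torch.randn(2, 16, 768)
print(pixel_regression_loss(pred, target))
```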

3. Experimental results

The experimental section of the article analyzes in detail the performance of this autoregressive image model and the impact of each component on the results.

First, as pre-training progresses, performance on the downstream image classification task keeps improving, indicating that this pre-training method can indeed learn good image representations.


On the training data side, pre-training on a small dataset leads to overfitting, while with DFN-2B the initial validation loss is higher but there is no obvious overfitting.


The article also conducts a detailed ablation analysis of the design of each module of the model.


In the final comparison, AIM achieves very strong results, verifying that this autoregressive pre-training method is also effective on images and may become a mainstream way to pre-train large image models in the future.

