


The Transformer architecture has swept across many fields, including natural language processing, computer vision, speech, and multi-modality. Yet while the experimental results are impressive, research on how the Transformer actually works remains very limited.
The biggest mystery is how a Transformer, relying only on a simple prediction loss, can produce efficient representations through gradient-based training dynamics.
Recently, Dr. Tian Yuandong announced his team's latest research results: a mathematically rigorous analysis of the SGD training dynamics of a one-layer Transformer (one self-attention layer plus one decoder layer) on the next-token prediction task.
## Paper link: https://arxiv.org/abs/2305.16380
This paper opens the black box of how self-attention layers combine input tokens during training, and reveals the nature of the underlying inductive bias.
Specifically, under the assumptions of no positional encoding, long input sequences, and a decoder layer that learns faster than the self-attention layer, the researchers proved that self-attention acts as a discriminative scanning algorithm:
Starting from uniform attention, for a specific next token to be predicted, the model gradually attends more to the distinct key tokens and pays less attention to common tokens that occur across many different next tokens.
Among the distinct tokens, the model progressively drops attention weights, following the order of co-occurrence between key tokens and query tokens in the training set, from low to high.
Interestingly, this process does not end in winner-take-all; instead, it is slowed down by a phase transition controlled by the learning rates of the two layers, and eventually settles into an (almost) fixed token combination. These dynamics are also verified on synthetic and real-world data.
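For concreteness, the attention weights being tracked here can be written in the standard softmax form (the notation below is a generic sketch, not the paper's exact formulation): with query token $x_T$ and contextual tokens $x_1, \dots, x_{T-1}$, the weight placed on token $x_t$ is

```latex
b_t = \frac{\exp\left(x_T^{\top} W_Q^{\top} W_K\, x_t\right)}
           {\sum_{s=1}^{T-1} \exp\left(x_T^{\top} W_Q^{\top} W_K\, x_s\right)},
\qquad t = 1, \dots, T-1 .
```

Uniform attention is the case where all $b_t$ are equal; the dynamics described above are about how, during training, the $b_t$ on distinct key tokens grow while the $b_t$ on common tokens shrink.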
Dr. Tian Yuandong is a researcher and research manager at Meta AI Research and the leader of its Go AI project. His research directions are deep reinforcement learning and its applications in games, as well as theoretical analysis of deep learning models. He received his bachelor's and master's degrees from Shanghai Jiao Tong University in 2005 and 2008, and his doctorate from the Robotics Institute at Carnegie Mellon University in 2013.
He received a Marr Prize Honorable Mention at the 2013 International Conference on Computer Vision (ICCV) and an Outstanding Paper Honorable Mention at ICML 2021.
After completing his Ph.D., he published a "Five-Year Doctoral Summary" series reflecting on his doctoral career, covering topics such as choosing research directions, reading and accumulating knowledge, time management, work attitude, income, and sustainable career development.
## Revealing the 1-layer Transformer

Pre-trained models based on the Transformer architecture usually rely on very simple supervision tasks, such as predicting the next word or filling in blanks, yet they provide remarkably rich representations for downstream tasks, which is mind-boggling.
Although previous work has shown that the Transformer is essentially a universal approximator, many previously common machine learning models, such as kNN, kernel SVM, and multi-layer perceptrons, are also universal approximators, so this theory cannot explain the huge performance gap between the two classes of models.
The researchers therefore argue that it is important to understand the Transformer's training dynamics, that is, how the learnable parameters change over time during training.
The paper first gives a rigorous, formal mathematical description of the SGD training dynamics of a one-layer, position-encoding-free Transformer on next-token prediction (the training paradigm commonly used by GPT-series models).
The one-layer Transformer consists of a softmax self-attention layer and a decoder layer that predicts the next token.
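As a concrete reference, here is a minimal sketch of such a model in PyTorch (the class name, dimensions, and the choice to use the raw embeddings as values are illustrative assumptions, not the paper's code): a softmax self-attention layer whose query is the last token of the sequence, followed by a linear decoder over the vocabulary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneLayerTransformer(nn.Module):
    """Minimal 1-layer Transformer: softmax self-attention + linear decoder.

    No positional encoding, matching the setting analyzed in the paper;
    names and sizes here are illustrative.
    """

    def __init__(self, vocab_size: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.W_q = nn.Linear(d_model, d_model, bias=False)
        self.W_k = nn.Linear(d_model, d_model, bias=False)
        self.decoder = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) integer ids; the last token acts as the query.
        x = self.embed(tokens)                      # (B, T, d)
        q = self.W_q(x[:, -1:, :])                  # (B, 1, d) query from the last token
        k = self.W_k(x)                             # (B, T, d) keys from all tokens
        attn = F.softmax(q @ k.transpose(1, 2) / x.shape[-1] ** 0.5, dim=-1)  # (B, 1, T)
        ctx = attn @ x                              # attention-weighted mix of embeddings
        return self.decoder(ctx.squeeze(1))         # (B, vocab) logits for the next token
```

Training this model with a cross-entropy loss on the next token is the setting whose SGD dynamics the paper analyzes; the `attn` weights over the contextual tokens are the quantities whose evolution is characterized below.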
Assuming that the sequences are long and that the decoder learns faster than the self-attention layer, the paper proves the following dynamic behaviors of self-attention during training:
1. Frequency Bias
The model gradually pays more attention to key tokens that co-occur frequently with the query token, and reduces its attention to tokens that co-occur rarely.
2. Discriminative Bias
The model pays more attention to the distinct tokens that appear uniquely with the next token to be predicted, and loses interest in common tokens that appear across multiple next tokens.
Together, these two properties show that self-attention implicitly runs a discriminative scanning algorithm and carries an inductive bias: it favors distinct key tokens that frequently co-occur with the query token.
In addition, although the self-attention layer tends to become sparser during training, as the frequency bias suggests, the model does not collapse into a one-hot attention pattern, because of a phase transition in the training dynamics.
The final stage of learning does not converge to a saddle point with zero gradient; instead, it enters a region where the attention changes only slowly (i.e., logarithmically over time) and the parameters effectively freeze into what has been learned.
The results further show that the onset of the phase transition is controlled by the learning rates: a large learning rate produces sparse attention patterns, while, for a fixed self-attention learning rate, a large decoder learning rate leads to a faster phase transition and denser attention patterns.
The researchers call the SGD dynamics discovered in their work scan and snap:
Scan phase: self-attention concentrates on key tokens, i.e., distinct tokens that frequently co-occur with the next token to be predicted, while attention on all other tokens decreases.
Snap phase: the attention weights are almost frozen, and the token combination becomes fixed.
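The scan-and-snap picture can be probed with a small synthetic experiment along the following lines. This is an illustrative sketch of our own, not the paper's code: the token layout, data-generation scheme, learning rates, and entropy diagnostic are all assumptions. Every sequence ends with the same query token, the next token is determined by a single distinctive key token planted among common filler tokens, the decoder is given a larger learning rate than the self-attention parameters, and the entropy of the attention distribution is logged as training proceeds.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data (illustrative, not the paper's setup): every sequence ends with the
# same query token; the next token is determined by one distinctive "key" token
# planted somewhere in the context; all other context tokens are common fillers
# shared across classes and therefore uninformative.
V, T, n_class, d = 20, 16, 4, 32
QUERY = 0
key_tok = torch.arange(1, 1 + n_class)            # distinctive key tokens, one per class
common = torch.arange(1 + n_class, V - n_class)   # shared filler tokens
next_tok = torch.arange(V - n_class, V)           # target token for each class

def sample_batch(B=128):
    cls = torch.randint(n_class, (B,))
    seq = common[torch.randint(len(common), (B, T))]
    pos = torch.randint(T - 1, (B,))              # keep the last slot for the query
    seq[torch.arange(B), pos] = key_tok[cls]
    seq[:, -1] = QUERY
    return seq, next_tok[cls]

# Minimal 1-layer model: frozen unit-norm embeddings, a query-key matrix for
# softmax attention (query = last token), and a linear decoder.
emb = F.normalize(torch.randn(V, d), dim=-1)
W_qk = torch.zeros(d, d, requires_grad=True)
dec = torch.zeros(V, d, requires_grad=True)

def forward(seq):
    x = emb[seq]                                              # (B, T, d)
    scores = (x[:, -1] @ W_qk).unsqueeze(1) @ x.transpose(1, 2)
    attn = F.softmax(scores / d ** 0.5, dim=-1)               # (B, 1, T)
    ctx = (attn @ x).squeeze(1)                               # (B, d)
    return ctx @ dec.t(), attn.squeeze(1)

# Two learning rates: the decoder learns faster than self-attention,
# mirroring the paper's assumption.
opt = torch.optim.SGD([
    {"params": [W_qk], "lr": 0.5},
    {"params": [dec], "lr": 2.0},
])

for step in range(3001):
    seq, y = sample_batch()
    logits, attn = forward(seq)
    loss = F.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        # Entropy of the attention over context tokens: high = near-uniform
        # ("scan" just starting), low = concentrated on the key token ("snap").
        ent = -(attn * attn.clamp_min(1e-9).log()).sum(-1).mean()
        print(f"step {step:4d}  loss {loss.item():.3f}  attention entropy {ent.item():.3f}")
```

If the analysis carries over to this toy setting, the logged entropy should start near its maximum (uniform attention), drop as attention scans toward the key tokens, and then change only slowly once the pattern has snapped into place.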
This phenomenon has also been verified in simple real-world experiments: observing the lowest self-attention layer of 1-layer and 3-layer Transformers trained with SGD on WikiText, one finds that the attention freezes at some point during training and becomes sparse, even though the learning rate is held constant throughout.
