
Tian Yuandong's new work: opening the first layer of the Transformer black box, the attention mechanism is not so mysterious

王林 | 2023-06-12

The Transformer architecture has swept across many fields, including natural language processing, computer vision, speech, and multi-modality. While the experimental results are impressive, research on how Transformers actually work is still very limited.

The biggest mystery is how a Transformer, relying only on a simple prediction loss, can develop efficient representations through its gradient-based training dynamics.

Recently, Dr. Tian Yuandong announced his team's latest research result: a mathematically rigorous analysis of the SGD training dynamics of a 1-layer Transformer (one self-attention layer plus one decoder layer) on the next-token prediction task.


Paper link: https://arxiv.org/abs/2305.16380

This paper opens the black box of how the self-attention layer dynamically combines input tokens during training, and reveals the nature of the underlying inductive bias.

Specifically, under the assumptions of no positional encoding, long input sequences, and a decoder layer that learns faster than the self-attention layer, the researchers proved that self-attention acts as a discriminative scanning algorithm:

Starting from uniform attention, for a given next token to be predicted, the model gradually attends more to the distinct key tokens for that token, and pays progressively less attention to common tokens that appear in the context windows of many different next tokens.

Among the distinct tokens, the model gradually drops attention weights following the order of co-occurrence between the key token and the query token in the training set, from low to high.

Interestingly, this process does not end in winner-take-all: it is slowed down by a phase transition controlled by the learning rates of the two layers, leaving behind an (almost) fixed token combination. This dynamic is verified on both synthetic and real-world data.

Dr. Tian Yuandong is a researcher and research manager at Meta AI Research and the lead of its Go AI project. His research focuses on deep reinforcement learning and its applications in games, as well as the theoretical analysis of deep learning models. He received his bachelor's and master's degrees from Shanghai Jiao Tong University in 2005 and 2008, and his doctorate from the Robotics Institute at Carnegie Mellon University in 2013.

He received a Marr Prize Honorable Mention at the 2013 International Conference on Computer Vision (ICCV) and an ICML 2021 Outstanding Paper Honorable Mention.

After completing his Ph.D., he published a series titled "Five-Year Doctoral Summary", sharing thoughts and experiences on doctoral life, covering topics such as choosing a research direction, accumulating reading, time management, work attitude, income, and sustainable career development.

## Revealing the 1-layer Transformer

Pre-trained models based on the Transformer architecture usually involve only very simple supervised tasks, such as predicting the next word or filling in blanks, yet they provide remarkably rich representations for downstream tasks, which is mind-boggling.

Although previous work has shown that the Transformer is essentially a universal approximator, so are many previously common machine learning models, such as kNN, kernel SVMs, and multi-layer perceptrons. Universal approximation alone therefore cannot explain the huge performance gap between the two classes of models.


The researchers argue that it is important to understand the Transformer's training dynamics, that is, how the learnable parameters change over time during training.

The paper first gives a rigorous mathematical definition of the SGD training dynamics of a 1-layer, position-encoding-free Transformer on next-token prediction (the training paradigm commonly used for GPT-series models).

The 1-layer Transformer consists of a softmax self-attention layer followed by a decoder layer that predicts the next token.
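To make this setup concrete, here is a minimal PyTorch sketch of such a 1-layer model. It is a hedged illustration under the paper's stated assumptions (one softmax self-attention layer, no positional encoding, a linear decoder); the class name, hyperparameters, and single-head parameterization are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneLayerTransformer(nn.Module):
    """One softmax self-attention layer plus a linear next-token decoder."""

    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # no positional encoding
        self.wq = nn.Linear(d_model, d_model, bias=False)
        self.wk = nn.Linear(d_model, d_model, bias=False)
        self.wv = nn.Linear(d_model, d_model, bias=False)
        self.decoder = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len); the last position acts as the query token.
        x = self.embed(tokens)
        q = self.wq(x[:, -1:, :])                        # query from last token
        k, v = self.wk(x), self.wv(x)
        scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
        attn = F.softmax(scores, dim=-1)                 # how input tokens combine
        ctx = attn @ v                                   # weighted token combination
        return self.decoder(ctx.squeeze(1))              # logits for the next token
```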


Assuming that sequences are long and that the decoder learns faster than the self-attention layer, the paper proves the following dynamic behavior of self-attention during training:

1. Frequency Bias

The model gradually attends more to key tokens that co-occur frequently with the query token, and reduces its attention to tokens that co-occur rarely.

2. Discriminative Bias

The model attends more to the distinct tokens that appear only with the next token to be predicted, and loses interest in common tokens that appear with many different next tokens.

Together, these two properties show that self-attention implicitly runs a discriminative scanning algorithm with an inductive bias: it favors distinct key tokens that frequently co-occur with the query token. A toy illustration of this distinction is sketched below.
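To see what "distinct" versus "common" means here, the following snippet (entirely hypothetical data, not from the paper) counts, for each key token, how many different next tokens it precedes: keys tied to a single next token are the distinct ones the model favors.

```python
from collections import defaultdict

# Hypothetical (context tokens, next token) pairs for illustration only.
data = [
    (["quick", "brown"], "fox"),
    (["lazy", "brown"], "dog"),
    (["quick", "red"], "fox"),
]

next_tokens_per_key = defaultdict(set)
for context, nxt in data:
    for key in context:
        next_tokens_per_key[key].add(nxt)

for key, nxts in sorted(next_tokens_per_key.items()):
    kind = "common" if len(nxts) > 1 else "distinct"
    print(f"{key!r} precedes {sorted(nxts)} -> {kind}")

# 'brown' precedes both 'fox' and 'dog' -> common: attention to it decays.
# 'quick', 'red', 'lazy' each precede a single next token -> distinct:
# attention to them grows, ordered by co-occurrence frequency.
```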

Additionally, although the self-attention layer tends to become sparser during training, as the frequency bias suggests, it does not collapse into a one-hot distribution, because of a phase transition in the training dynamics.


The final stage of learning does not converge to a saddle point with zero gradient; instead, it enters a region where attention changes slowly (i.e., logarithmically over time) and the parameters effectively freeze.

The results further show that the onset of the phase transition is controlled by the learning rates: a large learning rate produces sparse attention patterns, while for a fixed self-attention learning rate, a large decoder learning rate leads to a faster phase transition and denser attention patterns.
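In practice, the regime the analysis studies, a decoder that learns faster than self-attention, can be mimicked with per-group learning rates. The sketch below reuses the hypothetical OneLayerTransformer above; the parameter grouping and the specific rates are illustrative assumptions, not values from the paper.

```python
import torch

# Illustrative rates only; the analyzed regime has the decoder's learning
# rate much larger than the self-attention layer's.
model = OneLayerTransformer(vocab_size=1000, d_model=64)
attn_params = [p for n, p in model.named_parameters()
               if n.startswith(("wq", "wk", "wv"))]
dec_params = [p for n, p in model.named_parameters()
              if n.startswith(("embed", "decoder"))]
optimizer = torch.optim.SGD([
    {"params": attn_params, "lr": 1e-3},   # slow self-attention layer
    {"params": dec_params, "lr": 1e-1},    # fast decoder (and embedding)
])
```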

The researchers named the SGD dynamics discovered in their work "scan and snap":

Scan phase: self-attention concentrates on the key tokens, i.e., the distinct tokens that frequently co-occur with the next token to be predicted; attention to all other tokens decays.

Snap phase: attention is almost frozen, and the token combination becomes fixed.
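One simple way to look for these two phases empirically is to track the entropy of the attention distribution during training: falling entropy corresponds to the scan phase (attention sparsifying), and a plateau to the snap phase (attention frozen). This is a hedged sketch built on the hypothetical model above, not the paper's measurement code.

```python
import torch

@torch.no_grad()
def attention_entropy(model: OneLayerTransformer, tokens: torch.Tensor) -> float:
    """Mean entropy (in nats) of the last-token attention distribution."""
    x = model.embed(tokens)
    q = model.wq(x[:, -1:, :])
    k = model.wk(x)
    attn = torch.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
    return -(attn * attn.clamp_min(1e-12).log()).sum(dim=-1).mean().item()

# Logged once per training step, entropy that first drops and then flattens
# out would correspond to scan followed by snap.
```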


This phenomenon is also verified in experiments on simple real-world data: observing the lowest self-attention layer of 1-layer and 3-layer Transformers trained on WikiText with SGD, one finds that even with a learning rate held constant throughout training, attention freezes at some point and becomes sparse.

