
New generation attention mechanism Lightning Attention-2: unlimited sequence length, constant computing power overhead, higher modeling accuracy


Current large language models are constrained by a sequence length limit, which restricts their use in areas such as multi-turn dialogue, long-text understanding, and multimodal data processing and generation. The fundamental reason is that the Transformer architecture commonly used in large language models has computational complexity that is quadratic in the sequence length, so the demand for computing resources grows quadratically as sequences get longer. How to process long sequences efficiently has therefore long been one of the challenges facing large language models.

Past approaches have mainly focused on adapting large language models to longer sequences at the inference stage. One approach employs ALiBi or similar relative position encodings so that a model can handle input sequences of different lengths. Another uses RoPE or similar relative position encodings with interpolation, briefly fine-tuning an already trained model to extend the sequence length. These methods give large models a degree of long-sequence modeling capability, but they do not reduce the cost of training or inference.

The OpenNLPLab team has open-sourced a new linear attention mechanism called Lightning Attention-2, designed to solve the long-sequence problem of large language models. It keeps the cost of training and inference on long sequences consistent with that of 1K-length sequences, enabling a set-and-forget solution. Before a memory bottleneck is hit, increasing the sequence length has no negative impact on training speed, which makes unlimited-length pretraining possible. In addition, the inference cost of very long texts is consistent with, or even lower than, that of 1K tokens, greatly reducing the inference cost of current large language models. As shown in the figure below, at model sizes of 400M, 1B and 3B, the training speed of LLaMA powered by FlashAttention2 begins to drop rapidly as the sequence length increases, whereas the speed of TransNormerLLM powered by Lightning Attention-2 remains almost unchanged.


Figure 1


  • Paper: Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models
  • Paper address: https://arxiv.org/pdf/2401.04658.pdf
  • Open source address: https://github.com/OpenNLPLab/lightning-attention

Lightning Attention-2 Introduction

Keeping the pre-training speed of large models consistent across different sequence lengths sounds like an impossible task. In fact, ever since the advent of linear attention in 2020, researchers have been working to make its practical efficiency match its theoretical linear computational complexity. Until mid-2023, work on linear attention mainly focused on aligning its accuracy with that of the Transformer architecture, and with improved linear attention mechanisms it finally became comparable in accuracy to state-of-the-art Transformers. However, the most critical computational trick in linear attention, switching from "left multiplication" to "right multiplication", is much slower in practice than the direct left-multiplication algorithm: implementing right multiplication requires a cumulative summation (cumsum) with a large number of loop operations, and the resulting I/O traffic makes it far less efficient than left multiplication. Keeping pre-training speed consistent across different sequence lengths therefore remained a challenge, requiring further improvements to the implementation of linear attention to raise its computational efficiency and reduce I/O.
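To see where the cumsum comes from, here is a naive sketch (illustrative only, not the paper's implementation) of the causal right-multiplication form in PyTorch: the KV state must be accumulated token by token, which is exactly the loop-heavy pattern that makes it slow in practice despite its linear complexity.

```python
import torch

def causal_linear_attention_right_naive(Q, K, V):
    """Causal (unidirectional) linear attention via right multiplication.

    The running KV state is a cumulative sum over tokens, so the naive
    implementation loops over the sequence -- the cumsum pattern that
    incurs heavy I/O on real hardware despite the O(n d^2) arithmetic cost.
    """
    n, d = Q.shape
    O = torch.empty_like(Q)
    kv = torch.zeros(d, d, dtype=Q.dtype)
    for i in range(n):                       # token-by-token cumulative sum
        kv = kv + torch.outer(K[i], V[i])    # accumulate k_i v_i^T
        O[i] = Q[i] @ kv                     # o_i = q_i (sum_{j<=i} k_j v_j^T)
    return O
```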


Figure 2

In order to better understand the idea of Lightning Attention-2, let us first review the formula of traditional softmax attention: O = softmax((QK^T) ⊙ M) V, where Q, K, V, M and O are the query, key, value, mask and output matrices respectively. In unidirectional tasks (such as GPT), M is a lower-triangular all-ones matrix; in bidirectional tasks (such as BERT), it can be ignored, i.e. bidirectional tasks have no mask matrix.
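For reference, a minimal PyTorch sketch of this softmax attention follows (not the paper's code; the usual 1/sqrt(d) scaling, omitted in the formula above, is included, and the causal mask is applied additively with -inf, which has the same effect under softmax as multiplying by M).

```python
import torch

def softmax_attention(Q, K, V, causal=True):
    """O = softmax((Q K^T) ⊙ M) V, written as standard masked softmax attention.

    Q, K, V: (n, d) tensors. Unidirectional (GPT-style) tasks use a
    lower-triangular causal mask; bidirectional (BERT-style) tasks use none.
    """
    n, d = Q.shape
    scores = (Q @ K.T) / d ** 0.5             # 1/sqrt(d) scaling (not shown in the text's formula)
    if causal:
        mask = torch.tril(torch.ones(n, n)).bool()
        scores = scores.masked_fill(~mask, float("-inf"))  # same effect as ⊙ M
    return torch.softmax(scores, dim=-1) @ V  # O(n^2) time and memory in the sequence length
```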

The author summarizes the overall idea of Lightning Attention-2 in the following three points:

1. One of the core ideas of linear attention is to remove the computationally expensive softmax operator, so that the attention formula can be written as O = ((QK^T) ⊙ M) V. However, because of the mask matrix M in unidirectional tasks, this form can still only be computed by left multiplication, so O(N) complexity cannot be obtained. For bidirectional tasks, where there is no mask matrix, the formula simplifies further to O = (QK^T) V. The subtlety of linear attention is that, simply by using the associativity of matrix multiplication, this can be rewritten as O = Q(K^T V). This form is called right multiplication, and the former is called left multiplication (see the sketch below). From Figure 2 we can intuitively see that linear attention achieves an attractive O(N) complexity in bidirectional tasks!
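To make the left-vs-right distinction concrete, here is a minimal sketch for the mask-free (bidirectional) case; the shapes and sizes are illustrative assumptions, not taken from the paper.

```python
import torch

def linear_attention_left(Q, K, V):
    """Left multiplication: O = (Q K^T) V.
    Materializes an (n, n) matrix -> O(n^2 d) time, O(n^2) memory."""
    return (Q @ K.T) @ V

def linear_attention_right(Q, K, V):
    """Right multiplication: O = Q (K^T V).
    Only a (d, d) state is formed -> O(n d^2) time, linear in n."""
    return Q @ (K.T @ V)

# By associativity of matrix multiplication the two are equivalent,
# but their costs scale very differently with the sequence length n.
n, d = 1024, 64
Q, K, V = (torch.randn(n, d, dtype=torch.float64) for _ in range(3))
assert torch.allclose(linear_attention_left(Q, K, V),
                      linear_attention_right(Q, K, V))
```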

2. However, as decoder-only GPT-style models have become the de facto standard for LLMs, how to exploit the right-multiplication property of linear attention to accelerate unidirectional tasks has become an urgent problem. To solve it, the authors propose a "divide and conquer" approach: the computation of the attention matrix is split into a diagonal (intra-block) part and a non-diagonal (inter-block) part, which are computed in different ways. As shown in Figure 3, Lightning Attention-2 uses the tiling idea common in computing to divide the Q, K and V matrices into the same number of blocks. The intra-block computation, because of the mask matrix, still uses left multiplication with O(N^2) complexity; the inter-block computation, which involves no mask matrix, can use right multiplication and enjoy O(N) complexity. After the two parts are computed separately, they are added directly to obtain the linear attention output O_i of the i-th block, and the KV state is accumulated via cumsum for use in the next block's computation (see the sketch below). The overall complexity of Lightning Attention-2 is thus a trade-off between the O(N^2) intra-block term and the O(N) inter-block term, and how good a trade-off is obtained is determined by the tiling block size.
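The following is a simplified sketch of this intra-block/inter-block decomposition as a plain PyTorch loop; the decay factors used in TransNormerLLM and the HBM/SRAM block management described in point 3 are omitted, and the block size and names are illustrative.

```python
import torch

def lightning_attention_2_forward_sketch(Q, K, V, block_size=256):
    """Divide-and-conquer causal linear attention (simplified).

    Intra-block: masked left multiplication, O(B^2 d) per block.
    Inter-block: right multiplication against an accumulated KV state, O(B d^2) per block.
    """
    n, d = Q.shape
    O = torch.zeros_like(Q)
    kv_state = torch.zeros(d, d, dtype=Q.dtype)   # accumulated K^T V of all previous blocks
    mask = torch.tril(torch.ones(block_size, block_size, dtype=Q.dtype))

    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        q, k, v = Q[start:end], K[start:end], V[start:end]
        m = mask[: end - start, : end - start]

        o_intra = ((q @ k.T) * m) @ v             # causal mask forces left multiplication inside the block
        o_inter = q @ kv_state                    # contribution of all earlier blocks via right multiplication
        O[start:end] = o_intra + o_inter

        kv_state = kv_state + k.T @ v             # block-wise cumsum of the KV state
    return O
```

Larger blocks shift more work into the quadratic intra-block term, while smaller blocks increase the number of inter-block state updates; this is the trade-off controlled by the block size mentioned above.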

3. Careful readers will notice that the above describes only the algorithmic part of Lightning Attention-2. The name "Lightning" comes from the fact that the authors fully considered the efficiency of executing this algorithm on GPU hardware. Inspired by the FlashAttention line of work, when performing the computation on the GPU, the authors move the split Q_i, K_i, V_i tensors from the larger but slower HBM into the smaller but faster SRAM, carry out the computation there, and thereby avoid a large amount of memory I/O overhead. Once a block has finished its linear attention computation, its output O_i is moved back to HBM. This process is repeated until all blocks have been processed.

Readers who want more details can carefully read Algorithm 1 and Algorithm 2, as well as the detailed derivations, in the paper. Both the algorithms and the derivations distinguish the forward and backward passes of Lightning Attention-2, which helps build a deeper understanding.


Figure 3


Lightning Attention-2 Accuracy Comparison

The researchers first compared the accuracy of Lightning Attention-2 and Lightning Attention-1 on a small (400M-parameter) model. As shown below, there is almost no difference between the two.


The researchers then compared TransNormerLLM powered by Lightning Attention-2 (TNL-LA2) with other advanced non-Transformer architectures and with LLaMA powered by FlashAttention2 at the 1B and 3B scales, trained on the same corpus. As shown in the figure below, TNL-LA2 follows a trend similar to LLaMA while achieving slightly better loss. This experiment shows that, in language modeling, the accuracy of Lightning Attention-2 is not inferior to that of the state-of-the-art Transformer architecture.


On large language model tasks, the researchers compared TNL-LA2 at 15B with Pythia on benchmarks commonly used for models of similar size. As shown in the table below, when trained on the same number of tokens, TNL-LA2 is slightly ahead of the softmax-attention-based Pythia model in common sense reasoning and in aggregate multiple-choice ability.


Lightning Attention-2 Speed Comparison

The researchers compared Lightning Attention-2 with FlashAttention2 in single-module speed and memory usage. As shown in the figure below, compared with Lightning Attention-1 and FlashAttention2, Lightning Attention-2 exhibits strictly linear growth of runtime with respect to sequence length. In terms of memory usage, all three show similar trends, with Lightning Attention-2 having a smaller footprint; this is because the memory usage of FlashAttention2 and Lightning Attention-1 is also approximately linear.


It is worth noting that the main focus of this work is solving the training speed of linear attention networks, achieving a training speed for arbitrarily long sequences similar to that of 1K sequences. Inference speed is not discussed much, because linear attention can be losslessly converted to an RNN mode at inference time, achieving a similar effect: the inference speed for a single token is constant. For a Transformer, by contrast, the inference speed of the current token depends on the number of tokens before it.
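A minimal sketch of this RNN-mode decoding follows (decay terms again omitted; names are illustrative): the per-token cost is O(d^2) regardless of how many tokens came before, whereas softmax attention's per-token cost grows with the context length.

```python
import torch

class LinearAttentionRNNState:
    """Recurrent state for linear attention decoding (one attention head)."""

    def __init__(self, dim, dtype=torch.float32):
        self.kv = torch.zeros(dim, dim, dtype=dtype)   # accumulated K^T V so far

    def step(self, q, k, v):
        """Process one new token; q, k, v are (dim,) vectors."""
        self.kv = self.kv + torch.outer(k, v)          # constant-cost state update
        return q @ self.kv                             # output for the current token
```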

The authors tested the inference speed of TransNormerLLM-7B powered by Lightning Attention-1 against common 7B models. As shown in the figure below, at a similar parameter size, the throughput of Lightning Attention-1 is 4x that of Baichuan and more than 3.5x that of ChatGLM, showing an excellent inference speed advantage.


Summary

Lightning Attention-2 represents a major advance in linear attention mechanisms. It allows linear attention to fully replace traditional softmax attention in terms of both accuracy and speed, provides sustainable scaling capability for ever larger models, and offers a path to processing infinitely long sequences with higher efficiency. The OpenNLPLab team will next study sequence-parallel algorithms based on linear attention to overcome the memory barrier currently encountered.
