Meta's unlimited-length text model is here: only 7B parameters, open source

WBOY | 2024-04-18 15:40:01

Following Google, Meta is also pushing into infinitely long contexts.

The quadratic complexity and weak length extrapolation of Transformers limit their ability to scale to long sequences. Sub-quadratic alternatives such as linear attention and state space models exist, but past experience shows they underperform in pre-training efficiency and downstream task accuracy.

Recently, Google's Infini-Transformer introduced an effective method for extending Transformer-based large language models (LLMs) to infinitely long inputs without increasing memory and compute requirements, attracting widespread attention.

Almost at the same time, Meta proposed its own infinite-length text technology.

  • Paper address: https://arxiv.org/pdf/2404.08801.pdf

  • Paper Title: MEGALODON: Efficient LLM Pretraining and Inference with Unlimited Context Length

  • Code: https://github.com/XuezheMax/megalodon

In a paper submitted on April 12, researchers from Meta, the University of Southern California, CMU, and UCSD introduced MEGALODON, a neural architecture for efficient sequence modeling with unlimited context length.

MEGALODON builds on the MEGA (Moving Average Equipped Gated Attention) architecture and introduces several technical components to improve its capability and stability, including a complex exponential moving average (CEMA), a timestep normalization layer, a normalized attention mechanism, and pre-norm with two-hop residual connections.

In a direct comparison with LLAMA2, MEGALODON achieves better efficiency than the Transformer at a scale of 7 billion parameters and 2 trillion training tokens. MEGALODON's training loss reaches 1.70, between those of LLAMA2-7B (1.75) and LLAMA2-13B (1.67). MEGALODON's improvements over the Transformer translate into strong performance on a range of benchmarks spanning different tasks and modalities.

MEGALODON is essentially an improved MEGA architecture (Ma et al., 2023), which uses a gated attention mechanism together with the classical exponential moving average (EMA) method. To further improve MEGALODON's capability and efficiency in large-scale long-context pre-training, the authors propose several technical components. First, MEGALODON introduces a complex exponential moving average (CEMA) component that extends the multidimensional damped EMA in MEGA to the complex domain. Second, MEGALODON proposes a timestep normalization layer, which generalizes group normalization to autoregressive sequence modeling and allows normalization along the sequence dimension.

To improve the stability of large-scale pre-training, MEGALODON further proposes normalized attention, as well as a pre-norm with two-hop residual configuration that modifies the widely adopted pre-normalization and post-normalization schemes. By simply chunking the input sequence into fixed blocks, as done in MEGA-chunk, MEGALODON achieves linear computational and memory complexity in both model training and inference.

In a direct comparison with LLAMA2 that controls for data and compute, MEGALODON-7B significantly outperforms the state-of-the-art Transformer variant used to train LLAMA2-7B in training perplexity. Evaluations on long-context modeling, including perplexity at context lengths up to 2M and long-context QA tasks in SCROLLS, demonstrate MEGALODON's ability to model sequences of unbounded length. Additional experiments on small and medium-sized benchmarks, including LRA, ImageNet, Speech Commands, WikiText-103, and PG-19, demonstrate MEGALODON's robustness across scales and modalities.

Method Introduction

The paper first briefly reviews the key components of the MEGA (Moving Average Equipped Gated Attention) architecture and discusses the problems with MEGA.

MEGA embeds an EMA (exponential moving average) component into the calculation of the attention matrix to incorporate an inductive bias across the timestep dimension. Specifically, the multidimensional damped EMA first expands each dimension of the input sequence X into h dimensions via an expansion matrix, and then applies the damped EMA in the h-dimensional hidden space. The form is as follows:

[Equation (1): the multidimensional damped EMA update (see the paper)]
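
As a concrete (if unoptimized) illustration of this recurrence, here is a minimal PyTorch sketch of a multidimensional damped EMA, written directly from the description above. The parameter names and shapes (alpha, delta, beta, eta, all of shape (d, h)) are illustrative assumptions; the released implementation is vectorized rather than a Python loop.

```python
import torch

def damped_ema(x, alpha, delta, beta, eta):
    """Minimal sketch of a multidimensional damped EMA (a reading of Eq. (1)).

    x:     (seq_len, d)  input sequence
    beta:  (d, h)        expands each input dimension into an h-dim hidden space
    alpha: (d, h)        EMA weights in (0, 1)
    delta: (d, h)        damping factors in (0, 1)
    eta:   (d, h)        contracts the hidden state back to one value per dimension
    returns: (seq_len, d)
    """
    seq_len, d = x.shape
    h = beta.shape[1]
    state = torch.zeros(d, h, dtype=x.dtype)
    outputs = []
    for t in range(seq_len):
        u = beta * x[t].unsqueeze(-1)                       # (d, h): expand to h dims
        state = alpha * u + (1.0 - alpha * delta) * state   # damped EMA update
        outputs.append((eta * state).sum(-1))               # (d,): contract hidden dims
    return torch.stack(outputs)                             # (seq_len, d)
```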

To reduce the quadratic complexity of the full attention mechanism, MEGA simply splits the query, key, and value sequences in Eqs. (14)-(16) into blocks of length c. The attention in Eq. (17) is applied to each block individually, yielding linear complexity O(kc^2) = O(nc).
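
As a rough sketch of this chunking idea (not MEGA's exact attention function f(·), which also involves the EMA output and gating; a plain scaled dot-product with softmax stands in here), the snippet below applies attention independently inside fixed-length chunks:

```python
import torch
import torch.nn.functional as F

def chunked_attention(q, k, v, chunk_size):
    """Attention applied independently within fixed-length chunks.

    q, k, v: (seq_len, dim); seq_len is assumed to be a multiple of chunk_size.
    With n / c chunks of length c, the total cost is O((n / c) * c^2) = O(n * c),
    i.e. linear in sequence length for a fixed chunk size.
    """
    n, d = q.shape
    q = q.view(-1, chunk_size, d)
    k = k.view(-1, chunk_size, d)
    v = v.view(-1, chunk_size, d)
    scores = torch.einsum("bqd,bkd->bqk", q, k) / d ** 0.5
    # causal mask inside each chunk (autoregressive setting)
    causal = torch.triu(torch.ones(chunk_size, chunk_size, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(causal, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    out = torch.einsum("bqk,bkd->bqd", weights, v)
    return out.reshape(n, d)
```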

Technically, the EMA sub-layer in MEGA helps capture local contextual information near each token, mitigating the loss of context beyond chunk boundaries. Although MEGA achieves impressive results, it faces the following problems:

i) Due to the limited expressive power of the EMA sub-layer in MEGA, the performance of MEGA with block-level attention still lags behind that of full-attention MEGA.

ii) For different tasks and data types, the final MEGA architecture may differ, for example in its normalization layers, normalization mode, and attention function f(·).

iii) There is no empirical evidence that MEGA scales to large-scale pre-training.

CEMA: extending the multidimensional damped EMA to the complex domain

To solve the problems faced by MEGA, the study proposes MEGALODON.

Specifically, the authors propose CEMA (complex exponential moving average), rewriting Equation (1) above into the following form:

[Equation (2): the CEMA update in the complex domain (see the paper)]

and parameterize θ_j in Equation (2) as:

[Equation (3): the parameterization of θ_j (see the paper)]
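
The following PyTorch sketch illustrates the core idea of CEMA: the damped EMA recurrence is run with a complex-valued decay (the real decay rotated by e^{iθ}) and only the real part is read out. It is one plausible reading of the complex-domain extension; the exact parameterization of θ_j and other details follow Eqs. (2)-(3) in the paper and are not reproduced here.

```python
import torch

def complex_damped_ema(x, alpha, delta, theta, beta, eta):
    """Illustrative sketch of a complex-valued damped EMA (the idea behind CEMA).

    The input weight alpha and the decay (1 - alpha * delta) are rotated by
    e^{i*theta}, so the hidden state oscillates as well as decays; only the real
    part is read out. All parameters have shape (d, h); x is (seq_len, d).
    """
    seq_len, d = x.shape
    rotation = torch.polar(torch.ones_like(theta), theta)   # e^{i*theta}, shape (d, h)
    state = torch.zeros(d, theta.shape[1], dtype=torch.cfloat)
    outputs = []
    for t in range(seq_len):
        u = (beta * x[t].unsqueeze(-1)).to(torch.cfloat)    # expand to h dims
        state = alpha.to(torch.cfloat) * rotation * u \
              + ((1.0 - alpha * delta).to(torch.cfloat) * rotation) * state
        outputs.append((eta * state.real).sum(-1))          # read out the real part
    return torch.stack(outputs)
```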

Timestep normalization

Although layer normalization combined with the Transformer has delivered impressive performance, it cannot directly reduce internal covariate shift along the spatial dimension (also called the timestep or sequence dimension).

In MEGALODON, this study extends group normalization to the autoregressive case by calculating the cumulative mean and variance.

[Equations: the cumulative mean and variance used by timestep normalization (see the paper)]
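
A minimal sketch of the idea, assuming a single feature group and omitting the learnable gain and bias: at each step, the mean and variance are accumulated only over past and current positions, so the normalization stays causal.

```python
import torch

def timestep_norm(x, eps=1e-5):
    """Sketch of timestep (cumulative) normalization for autoregressive models.

    At step t the statistics are computed only over positions 1..t (and, here,
    over all features as a single group), so no future information is used.
    x: (seq_len, d)
    """
    seq_len, d = x.shape
    counts = torch.arange(1, seq_len + 1, dtype=x.dtype).unsqueeze(-1) * d  # elements seen so far
    cum_sum = torch.cumsum(x.sum(dim=-1, keepdim=True), dim=0)
    cum_sq = torch.cumsum((x * x).sum(dim=-1, keepdim=True), dim=0)
    mean = cum_sum / counts
    var = cum_sq / counts - mean ** 2
    return (x - mean) / torch.sqrt(var.clamp(min=0) + eps)
```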

Figure 2 illustrates layer normalization and time step normalization.

Normalized attention in MEGALODON

In addition, the study proposes a normalization customized specifically for the MEGA attention mechanism to improve its stability. The form is as follows:

[Equations: MEGALODON's normalized attention (see the paper)]

The attention operation in Equation (17) above then becomes:

[Equation: the modified attention operation (see the paper)]
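
As a generic illustration only (not MEGALODON's exact formulation, which is given in the equations above), normalizing the inputs to the attention function, for example scaling queries and keys to unit L2 norm, bounds the attention logits and tends to stabilize training:

```python
import torch
import torch.nn.functional as F

def l2_normalized_attention(q, k, v, scale=1.0):
    """Generic sketch of 'normalized attention': queries and keys are scaled to
    unit L2 norm before the dot product, bounding the attention logits. This is
    NOT MEGALODON's exact formulation; it only illustrates the general idea of
    normalizing the inputs to the attention function.
    q, k, v: (seq_len, dim)
    """
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    scores = scale * q @ k.transpose(-2, -1)
    return F.softmax(scores, dim=-1) @ v
```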

Pre-Norm with Two-hop residuals

The investigation found that scaling up the model size causes instability with pre-normalization. Pre-normalization in a Transformer block can be expressed as (shown in Figure 3(b)):

[Equations and Figure 3: the pre-norm Transformer block (see the paper)]

In the original MEGA architecture, the φ in Eq. (19) is used in the gated residual connection of Eq. (21) to alleviate this problem. However, the update gate φ introduces more model parameters, and the instability problem persists when the model is scaled to 7 billion parameters. MEGALODON introduces a new configuration called pre-norm with two-hop residuals, which simply rearranges the residual connections in each block, as shown in Figure 3(c):

[Equations and Figure 3(c): the two-hop residual configuration (see the paper)]
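
To make the rearrangement concrete, here is a hypothetical PyTorch sketch contrasting a standard pre-norm block (Figure 3(b)) with one reading of the two-hop configuration (Figure 3(c)), in which the FFN's residual connects back to the block input x instead of to the intermediate output y. The exact wiring follows the paper's Figure 3(c).

```python
import torch
import torch.nn as nn

class PreNormBlock(nn.Module):
    """Standard pre-norm block (Figure 3(b)): each sub-layer adds onto its own input."""
    def __init__(self, dim, attn, ffn):
        super().__init__()
        self.attn, self.ffn = attn, ffn
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        y = x + self.attn(self.norm1(x))
        return y + self.ffn(self.norm2(y))

class TwoHopPreNormBlock(nn.Module):
    """One reading of pre-norm with two-hop residual (Figure 3(c)): the FFN's
    residual connects back to the block input x, so the residual stream does not
    compound across both sub-layers."""
    def __init__(self, dim, attn, ffn):
        super().__init__()
        self.attn, self.ffn = attn, ffn
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        y = x + self.attn(self.norm1(x))
        return x + self.ffn(self.norm2(y))

# Example wiring with shape-preserving placeholder sub-layers:
dim = 512
attn = nn.Linear(dim, dim)  # stands in for an attention sub-layer
ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
out = TwoHopPreNormBlock(dim, attn, ffn)(torch.randn(8, dim))
```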

Experiments

To evaluate the scalability and efficiency of MEGALODON in long-context sequence modeling, the paper scales MEGALODON to 7 billion parameters.

LLM pre-training

To illustrate data efficiency, the researchers show the negative log-likelihood (NLL) of MEGALODON-7B, LLAMA2-7B, and LLAMA2-13B during training, as shown in Figure 1.

Under the same number of training tokens, MEGALODON-7B obtained significantly better (lower) NLL than LLAMA2-7B, showing better data efficiency.

Figure 4 illustrates the average WPS (words/tokens per second) per device for LLAMA2-7B and MEGALODON-7B at 4K and 32K context lengths, respectively. For the LLAMA2 model, the study uses Flash-Attention V2 to accelerate the computation of full attention. At a 4K context length, MEGALODON-7B is slightly slower (~6%) than LLAMA2-7B due to the introduction of CEMA and timestep normalization. When the context length is extended to 32K, MEGALODON-7B is significantly faster (about 32%) than LLAMA2-7B, demonstrating MEGALODON's computational efficiency for long-context pre-training.

Short context evaluation

Table 1 summarizes the results of MEGALODON and LLAMA2 on academic benchmarks, along with comparisons to other open-source base models, including MPT, RWKV, Mamba, Mistral, and Gemma. After pre-training on the same 2T tokens, MEGALODON-7B outperforms LLAMA2-7B on all benchmarks. On some tasks, MEGALODON-7B's performance is comparable to or even better than LLAMA2-13B's.

Long context evaluation

Figure 5 shows the perplexity (PPL) on the validation set at context lengths ranging from 4K to 2M. The PPL decreases monotonically with context length, validating MEGALODON's effectiveness and robustness in modeling extremely long sequences.

Instruction fine-tuning

Table 3 summarizes the performance of the 7B model on MT-Bench. MEGALODON shows superior performance on MT-Bench compared to Vicuna and is comparable to LLAMA2-Chat, which utilizes RLHF for further alignment fine-tuning.

Medium-Scale Benchmark Evaluation

To evaluate MEGALODON's performance on image classification, the study conducted experiments on the ImageNet-1K dataset. Table 4 reports Top-1 accuracy on the validation set. MEGALODON's accuracy is 1.3% higher than DeiT-B's and 0.8% higher than MEGA's.

Table 5 reports MEGALODON's word-level perplexity (PPL) on PG-19 and compares it with previous state-of-the-art models, including the Compressive Transformer, Perceiver AR, Block-Recurrent Transformer, and MEGABYTE. MEGALODON's performance is clearly ahead.

For more details, please refer to the original text of the paper.

Statement: This article is reproduced from jiqizhixin.com.