
DeepMind says AI models need to slim down, and autoregression is becoming the main trend

WBOY · 2023-04-27

Autoregressive attention programs built around the Transformer have long struggled to scale. To that end, DeepMind/Google recently launched a new project proposing an effective way to help such programs slim down.


The Perceiver AR architecture, created by DeepMind and Google Brain, avoids a resource-intensive task: computing attention over the full combination of inputs and outputs. Instead, it introduces "causal masking" into the latent space, thereby recovering the autoregressive ordering of a typical Transformer.

One of the most striking trends in artificial intelligence and deep learning is that models keep getting larger. Since scale is often directly linked to performance, experts in the field expect this wave of expansion to continue.

However, as projects grow, so do the resources they consume, and this has raised new social and ethical questions for deep learning, a dilemma that has drawn the attention of mainstream scientific journals such as Nature.

Because of this, we may have to return to an old word: efficiency. Is there room to make AI programs more efficient?

Scientists at DeepMind and Google Brain recently modified Perceiver, the neural network they launched last year, hoping to improve how efficiently it uses computing resources.

The new program is named Perceiver AR. The "AR" stands for "autoregressive", which is itself a growing direction in deep learning today. Autoregression is a technique that lets a machine feed its outputs back into the program as new inputs: a recursive operation that builds an attention map in which many elements relate to one another.
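To make the feedback loop concrete, here is a minimal sketch of autoregressive generation. The "model" is a toy stand-in (a sum modulo a small vocabulary), not DeepMind's network; only the loop structure, where each output is appended to the context for the next step, is the point.

```python
import numpy as np

VOCAB_SIZE = 16

def next_token(context):
    # Toy stand-in for a trained model: a real network would output a
    # probability distribution over the vocabulary and sample from it.
    return int(np.sum(context) % VOCAB_SIZE)

def generate(prompt, steps):
    seq = list(prompt)
    for _ in range(steps):
        # Autoregression: the output is appended and becomes part of the
        # input for the next prediction step.
        seq.append(next_token(seq))
    return seq

print(generate([1, 2, 3], 4))  # [1, 2, 3, 6, 12, 8, 0]
```

GPT-style language models and Perceiver AR alike follow this generate-append-repeat pattern; they differ in how the attention over the growing context is computed.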

Transformer, the popular neural network Google launched in 2017, also has this autoregressive character. The later GPT-3 and the first version of Perceiver continued along the same autoregressive route.

Before Perceiver AR came Perceiver IO, the second version of Perceiver, launched in March of this year; the first version of Perceiver was released around this time last year.

Perceiver's initial innovation was to take the Transformer and adjust it so it could flexibly ingest many kinds of input, including text, sound, and images, freeing it from dependence on any specific input type. This let researchers develop neural networks that use multiple input types.

Riding the same wave as other model projects, Perceiver has begun using the autoregressive attention mechanism to mix different input modalities and task domains. Other such use cases include Google's Pathways, DeepMind's Gato, and Meta's data2vec.

In March of this year, Andrew Jaegle, creator of the first version of Perceiver, and his colleagues released the "IO" version. It expands the output types Perceiver supports, enabling a wide range of structured outputs: natural language, optical flow fields, audiovisual sequences, even unordered sets of symbols. Perceiver IO can even generate actions in the game StarCraft II.

In this latest paper, Perceiver AR achieves general-purpose autoregressive modeling over long contexts. Along the way, however, Jaegle and his team faced a new challenge: how to scale the model across many multimodal input and output tasks.

The problem is that the autoregressive quality of the Transformer, and of any program that likewise builds an input-to-output attention map, demands attention over distributions of up to hundreds of thousands of elements.

This is the fatal weakness of the attention mechanism: everything must be attended to in order to build up the probability distribution of the attention map.

As Jaegle and his team note in the paper, as the number of things in the input that must be compared with one another grows, the model's consumption of computing resources balloons:

Long-context structures of this kind conflict with the computational nature of the Transformer. Transformers repeatedly apply self-attention to the input, which makes computational requirements grow quadratically with input length and linearly with model depth. More input data means more input tokens corresponding to the observed content, the patterns in the input data become subtler and more complex, and deeper layers are needed to model them. With limited compute, Transformer users are forced either to truncate the model input (preventing the observation of more distant patterns) or to limit model depth (depriving it of the expressive power needed to model complex patterns).
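The quadratic term the quote describes can be seen in a simple cost model. This is an illustrative back-of-the-envelope count (the constants are my own simplification), showing only that doubling the input length quadruples the dominant cost of a self-attention layer:

```python
def self_attention_flops(n, d):
    # QK^T builds an n-by-n score matrix (n*n*d multiplies), and the
    # attention-weighted sum over V is another n*n*d: the n**2 term
    # dominates, so cost grows quadratically with input length n.
    return 2 * n * n * d

d_model = 64
for n in (1024, 2048, 4096):
    print(n, self_attention_flops(n, d_model))
```

Each doubling of `n` multiplies the count by four, while stacking more layers only multiplies it linearly, which is exactly the quadratic-in-length, linear-in-depth growth the paper describes.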

In fact, the first version of Perceiver already tried to improve Transformer efficiency: rather than performing attention on the input directly, it performs attention on a latent representation of the input. In this way, the compute needed to process a large input array can be "decoupled" from the compute required by a very deep network.
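The trick is cross-attention from a small, fixed set of latent vectors to the long input. The sketch below is a simplified NumPy illustration of that idea, not Perceiver's actual implementation: the score matrix is latents-by-inputs, so its size grows only linearly with input length, because the number of latents stays fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attention(latents, inputs):
    # latents: (m, d), inputs: (n, d). The score matrix is (m, n), so cost
    # grows linearly with input length n, not quadratically, because the
    # number of latents m is fixed regardless of n.
    scores = latents @ inputs.T / np.sqrt(inputs.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ inputs

n, m, d = 10_000, 256, 64          # long input, small fixed latent array
inputs = rng.normal(size=(n, d))
latents = rng.normal(size=(m, d))
out = cross_attention(latents, inputs)
print(out.shape)  # (256, 64)
```

After this step, the deep self-attention stack operates on 256 latents instead of 10,000 inputs, which is the decoupling the paragraph describes.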


Comparison of Perceiver AR with a standard Transformer deep network and the enhanced Transformer-XL.

In the latent part, the input representation is compressed, becoming a more efficient attention engine. This way, "in a deep network, most of the compute actually happens on the self-attention stack" rather than in operating over countless inputs.

But a challenge remained: the latent representation has no concept of order, so Perceiver could not generate output the way a Transformer does. Order is crucial in autoregression: each output must be the product of the inputs before it, not after it.

But because each latent attends to all inputs regardless of their position, the researchers write, Perceiver "will not be directly applicable" to "autoregressive generation, which requires that each model output depend only on the inputs that precede it."

With Perceiver AR, the research team went a step further and imposed ordering inside Perceiver to enable autoregression.

The key is to apply so-called "causal masking" to both the input, where it constrains the cross-attention, and the latent representation, where it forces the program to attend only to what precedes a given symbol. This restores the directionality of the Transformer while still substantially reducing the total amount of computation.
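A causal mask is just a lower-triangular matrix of allowed positions: position i may attend to position j only when j ≤ i. The sketch below shows the standard construction in NumPy (illustrative, not Perceiver AR's code); with all scores equal, each row of the masked softmax spreads its weight uniformly over the permitted prefix.

```python
import numpy as np

def causal_mask(n):
    # mask[i, j] is True when position i may attend to position j:
    # only positions at or before i, the "causal" constraint.
    return np.tril(np.ones((n, n), dtype=bool))

def masked_softmax(scores, mask):
    scores = np.where(mask, scores, -np.inf)  # masked positions get zero weight
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return w / w.sum(axis=-1, keepdims=True)

scores = np.zeros((4, 4))  # uniform scores, so weights show the mask's shape
weights = masked_softmax(scores, causal_mask(4))
print(np.round(weights, 2))
```

Row 0 attends only to itself, row 3 attends to all four positions, and every weight above the diagonal is exactly zero, which is what makes each output depend only on what came before it.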

The result is that Perceiver AR achieves modeling quality comparable to a Transformer over far more input, with greatly improved performance.

They write, "Perceiver AR can perfectly identify and learn long-context patterns at least 100k tokens apart in a synthetic copying task." By comparison, Transformer has a hard limit of 2,048 tokens: more tokens means longer context, and longer context means richer, more complex program output.

Compared with the widely used decoder-only Transformer and Transformer-XL architectures, Perceiver AR is more efficient and can flexibly vary the compute it actually uses at test time according to a target budget.

The paper reports that, for the same amount of attention, Perceiver AR's wall-clock time is significantly shorter, and it can absorb more context (more input symbols) under the same compute budget:

Transformer's context length is limited to 2,048 tokens, which supports only 6 layers, because larger models and longer contexts require an enormous amount of memory. Using the same 6-layer configuration, Transformer-XL's memory lets us extend the total context length to 8,192 tokens. Perceiver AR can extend the context length to 65k tokens, and with further optimization is expected to exceed even 100k.

All of this makes computing more flexible: "We gain better control over how much compute a given model uses at test time, letting us smoothly trade off speed against performance."

Jaegle and colleagues also write that this approach works for any input type and is not limited to word symbols. For example, it can handle the pixels of an image:

The same procedure works for any input that can be ordered, as long as causal masking is applied. For example, an image's RGB channels can be ordered in raster-scan order, decoding the R, G, and B color channels of each pixel in the sequence, in order or even out of order.
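Raster-scan ordering simply flattens the image left-to-right, top-to-bottom, emitting each pixel's R, G, and B values in turn. A toy NumPy illustration (my own example, not the paper's code):

```python
import numpy as np

# Toy 2x2 RGB image: shape (height, width, channels), values 0..255.
img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)

# Raster-scan order: left-to-right, top-to-bottom, with each pixel
# contributing its R, G, B channel values in turn. The image becomes a
# 1-D token sequence to which causal masking can then be applied.
tokens = img.reshape(-1)
print(tokens.tolist())  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

Once the pixels are a sequence, "earlier" and "later" are well defined, so the same causal mask used for text applies unchanged.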

The authors see great potential in Perceiver, writing in the paper, "Perceiver AR is an ideal candidate for a long-context, general-purpose autoregressive model."

But in pursuit of even higher computational efficiency, one additional source of instability must be addressed. The authors note that the research community has also recently tried to reduce the compute demands of autoregressive attention through "sparsity", that is, limiting the importance assigned to some input elements.


In the same wall-clock time, Perceiver AR can process more input symbols with the same number of layers, or significantly cut computation time for the same number of input symbols. The authors believe this flexibility could lead to a general route to efficiency for large networks.

But sparsity has shortcomings of its own, the main one being rigidity. The paper writes, "The disadvantage of sparsity methods is that the sparsity must be created by hand-tuning or heuristics. These heuristics are often applicable only to specific domains and are often difficult to tune." The Sparse Transformer, released by OpenAI and NVIDIA in 2019, is one such sparsity project.

They explain, "In contrast, our work does not require hand-crafting sparsity patterns on the attention layers; instead, it lets the network learn which long-context inputs deserve more attention and should be propagated through the network."

The paper adds, "The initial cross-attention operation, which reduces the number of positions in the sequence, can be viewed as a form of learned sparsity."

Sparsity learned in this way may become another powerful tool in the deep learning toolkit over the next few years.


Statement: This article is reproduced from 51cto.com. In case of any infringement, please contact admin@php.cn for removal.