DeepMind upgrades Transformer, forward pass FLOPs can be reduced by up to half

Introducing Mixture-of-Depths: DeepMind's new design can greatly improve Transformer efficiency.


The importance of the Transformer needs no introduction. Many research teams are working on improving this transformative technology, and one important direction is making the Transformer more efficient, for example by giving it adaptive computation capabilities so that unnecessary computation can be skipped.

As Illia Polosukhin, one of the original proposers of the Transformer architecture and co-founder of NEAR Protocol, said in a recent conversation with Jen-Hsun Huang: "Adaptive computation is the next thing that must appear. We need to pay attention to how much computing resource is spent on a specific problem."
In fact, humans are born with this kind of adaptive computation: when solving different problems, people naturally allocate different amounts of time and energy.

The same should be true for language modeling: to obtain accurate predictions, it is not necessary to spend the same time or resources on every token and sequence. Yet the Transformer spends the same amount of computation on each token in a forward pass, which leads to the lament that most of the computation is wasted.
Ideally, the Transformer's compute budget could be reduced by avoiding computation that is unnecessary.

Conditional computation is a technique that reduces total computation by computing only when needed. Many researchers have previously proposed algorithms for deciding when to compute and how much computation to use.

However, for this challenging problem, the commonly proposed solutions may not cope well with existing hardware constraints, because they tend to introduce dynamic computation graphs. The most promising conditional-computation methods may instead be those that align with the current hardware stack, prioritizing static computation graphs and known tensor sizes chosen to maximize hardware utilization.

Recently, Google DeepMind studied this problem, hoping to use a lower compute budget to reduce the amount of computation the Transformer performs.
  • Paper title: Mixture-of-Depths: Dynamically allocating compute in transformer-based language models
  • Paper address: https://arxiv.org/pdf/2404.02258.pdf

Their idea: in each layer, the network must learn to make a decision for each token, dynamically allocating the available compute budget. In their specific implementation, the total compute is set by the user before training and never changes; it is not a function of the network's routing decisions at run time. This allows the hardware-efficiency gains (such as reduced memory footprint or fewer FLOPs per forward pass) to be anticipated and exploited in advance. The team's experiments show that these gains can be achieved without compromising overall network performance.

The DeepMind team adopts an approach similar to the Mixture-of-Experts (MoE) Transformer, in which dynamic token-level routing decisions are made across the entire network depth.

Unlike MoE, their choice here is binary: either apply the computation to a token (as in a standard Transformer), or pass it around the block through a residual connection (leaving it unchanged and saving computation). Another difference from MoE is that this routing is applied to both the MLP and the multi-head attention. It therefore also affects the keys and queries the network processes, so the router decides not only which tokens are updated but also which tokens are available to attend to.

DeepMind named this strategy Mixture-of-Depths (MoD), highlighting the fact that each token passes through a different number of layers (or blocks) across the Transformer's depth; see Figure 1.
MoD lets users trade off performance against speed. On the one hand, a MoD Transformer trained with the same FLOPs as a regular Transformer can improve the final log-probability training objective by up to 1.5%. On the other hand, a MoD Transformer can reach the same training loss as a regular Transformer with less computation, up to 50% fewer FLOPs per forward pass.

These results show that a MoD Transformer can learn to route intelligently, i.e., to skip unnecessary computation.

Implementing the Mixture-of-Depths (MoD) Transformer

In summary, the strategy is as follows (a minimal code sketch follows the list):

  • Set a static compute budget that is lower than that of an equivalent regular Transformer, by limiting the number of tokens in a sequence that can participate in a block's computation (i.e., the self-attention module and the subsequent MLP). For example, a regular Transformer lets every token in the sequence participate in self-attention, but a MoD Transformer can limit this to, say, 50% of the tokens.
  • For each token, a router in each block emits a scalar weight; this weight expresses the token's routing preference: participate in the block's computation, or bypass it.
  • In each block, find the top-k largest scalar weights; the corresponding tokens participate in the block's computation. Since exactly k tokens take part, the computation graph and tensor sizes remain static throughout training; which tokens they are is dynamic and context-dependent, as determined by the router.
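To make this concrete, here is a minimal PyTorch sketch of a single MoD-style block under the assumptions above (50% capacity, routing applied around both attention and MLP). All module and variable names are illustrative, and squashing the router weight with a sigmoid before scaling the update is one plausible choice, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, capacity: float = 0.5):
        super().__init__()
        self.capacity = capacity              # fraction of tokens that enter the block
        self.router = nn.Linear(d_model, 1)   # emits one scalar routing weight per token
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        k = max(1, int(self.capacity * T))     # static k -> static shapes and graph
        r = self.router(x).squeeze(-1)         # (B, T) scalar routing weights
        idx = torch.topk(r, k, dim=-1).indices.sort(dim=-1).values  # keep sequence order
        gidx = idx.unsqueeze(-1).expand(B, k, D)
        sel = torch.gather(x, 1, gidx)         # the k tokens that participate

        # Standard pre-norm attention + MLP, applied only to the selected tokens.
        mask = torch.triu(torch.ones(k, k, dtype=torch.bool, device=x.device), 1)
        h = self.norm1(sel)
        a, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        delta = a + self.mlp(self.norm2(sel + a))

        # Scale the update by the (squashed) router weight so the router is trained
        # through the gradient path; bypassed tokens flow through unchanged.
        w = torch.sigmoid(torch.gather(r, 1, idx)).unsqueeze(-1)
        return x.scatter_add(1, gidx, w * delta)
```

Because k is fixed before training, every tensor shape in the block is known statically, which is exactly the hardware-friendly property the method relies on.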

Routing schemes

The team considered two learned routing schemes (see Figure 2): token-choice and expert-choice.
In token-choice routing, the router produces, for each token, a probability distribution over the computation paths (analogous to the distribution over expert identities in an MoE Transformer). Tokens are then sent to their preferred path (the path with the highest probability), and an auxiliary loss ensures that not all tokens converge on the same path. Token-choice routing can have load-balancing problems, because there is no guarantee that tokens divide themselves appropriately among the possible paths.

Expert-choice routing reverses this scheme: instead of letting tokens choose their preferred path, each path selects the top-k tokens according to the tokens' preferences. This guarantees perfect load balancing, since each path always receives exactly k tokens. However, it can also leave some tokens over- or under-processed, because a token may be in the top-k of several paths while another token is selected by none.
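The difference between the two schemes can be seen on a toy tensor; the shapes and names below are illustrative assumptions, not the paper's code.

```python
import torch

B, T, P = 1, 8, 2          # batch, tokens, paths (MoD has 2: compute vs. bypass)
logits = torch.randn(B, T, P)

# Token-choice: each token picks its highest-probability path.
token_choice = logits.softmax(-1).argmax(-1)   # (B, T) path id per token
# Load per path is unbounded -> an auxiliary balancing loss is needed.

# Expert-choice: each path picks its own top-k tokens by preference.
k = T // 2
expert_choice = torch.topk(logits[..., 0], k, dim=-1).indices  # (B, k) tokens for path 0
# Each path always receives exactly k tokens -> balanced load by construction.
```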

DeepMind chose expert-choice routing, for three reasons.

First, it needs no auxiliary balancing loss.

Second, since the top-k operation depends on the magnitudes of the routing weights, this scheme allows relative routing weights to be used, which helps determine which tokens most need the current block's computation; the router can try to ensure that the most critical tokens land in the top-k by setting their weights appropriately, something token-choice routing cannot do. In this specific use case, one of the computation paths is essentially a null operation, so routing important tokens to the null path should be avoided.

Third, since routing involves only two paths, a single top-k operation can efficiently split the tokens into two mutually exclusive sets, one for each computation path, which avoids the over- and under-processing problems mentioned above.

For the specific implementation of this routing scheme, please refer to the original paper.
Sampling

Although expert-choice routing has many advantages, it has one obvious problem: the top-k operation is non-causal. Whether a given token's routing weight is in the top-k depends on the routing weights of the tokens that come after it, and those weights are unavailable during autoregressive sampling.

To solve this problem, the team tested two methods.

The first is to add a simple auxiliary loss; in practice it affects the main language-modeling objective by only 0.2%–0.3%, but it allows the model to sample autoregressively. They use a binary cross-entropy loss in which the router's outputs provide the logits, and the top-k of those logits provide the targets (i.e., 1 if a token is in the top-k, 0 otherwise).
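A minimal sketch of this auxiliary loss, assuming `logits` holds the router outputs for each token of a sequence (the function name is illustrative):

```python
import torch
import torch.nn.functional as F

def router_sampling_loss(logits: torch.Tensor, k: int) -> torch.Tensor:
    """BCE loss whose targets mark whether each token is in the top-k router logits."""
    targets = torch.zeros_like(logits)
    targets.scatter_(-1, torch.topk(logits, k, dim=-1).indices, 1.0)  # 1 if in top-k
    return F.binary_cross_entropy_with_logits(logits, targets.detach())
```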

The second method introduces a small auxiliary MLP predictor (like a second router) whose input is the same as the router's (with a stop-gradient), but whose output is a prediction of whether the token will be in the sequence's top-k. This approach does not affect the language-modeling objective, and experiments show it does not significantly slow down this step.
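A hedged sketch of such a predictor, with illustrative names and sizes; the stop-gradient is applied to its input, as described above:

```python
import torch
import torch.nn as nn

class TopKPredictor(nn.Module):
    """Predicts whether a token will fall in the sequence-level top-k,
    from the same input the router sees (with stop-gradient)."""
    def __init__(self, d_model: int, d_hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # detach() stops gradients from flowing back into the trunk
        return self.net(x.detach()).squeeze(-1)   # (B, T) logits over "in top-k"
```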

With these methods, autoregressive sampling can decide, from the router's output alone, whether to route a token into a block or around it, without depending on any information from future tokens. Experimental results show that this is a relatively simple auxiliary task that quickly reaches 99% accuracy.
Results

Training, isoFLOP comparison

First, the team trained some models with a relatively small FLOP budget (6e18) to determine the optimal hyperparameters (see Figure 3 below).
Overall, the MoD Transformer drags the baseline isoFLOP curve down and to the right. In other words, the optimal MoD Transformer achieves lower loss than the optimal baseline while also having more parameters. This effect has a fortunate consequence: some MoD models perform as well as or better than the optimal baseline (while stepping faster), even though they themselves are not isoFLOP-optimal under their hyperparameter settings. For example, a 220M-parameter MoD variant (model #3 in Figure 3) slightly outperforms the isoFLOP-optimal baseline (also 220M parameters, model #1 in Figure 3), yet this MoD variant steps more than 60% faster during training.

Figure 4 below shows the isoFLOP analysis for total FLOP budgets of 6e18, 2e19 and 1e20. As can be seen, the trend continues at larger FLOP budgets.
Figure 5 below shows the routing decisions of a MoD Transformer trained with interleaved routing blocks. Despite the large number of block bypasses, this MoD Transformer still outperforms the regular Transformer.
Autoregressive evaluation

They also evaluated the performance of MoD variants under autoregressive sampling; the results are shown in Figure 6 below. These results demonstrate that the computational savings of the MoD Transformer are not limited to the training setting.
Mixture-of-Depths-and-Experts (MoDE)

MoD integrates naturally with MoE models into so-called MoDE models. Figure 7 below illustrates MoDE and the improvements it brings.
MoDE comes in two variants: staged MoDE and integrated MoDE.

Staged MoDE performs the routing (bypass or process) before the self-attention step, while integrated MoDE implements MoD routing by placing "no-op" experts among the regular MLP experts. The advantage of the former is that it lets tokens skip the self-attention step; the advantage of the latter is that its routing mechanism is simple.
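As a rough illustration of the integrated variant, the sketch below adds a "no-op" slot to the router alongside the regular MLP experts. It uses dense soft routing for brevity (a real MoE would dispatch sparsely with top-k), and all names here are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class IntegratedMoDELayer(nn.Module):
    def __init__(self, d_model: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts + 1)   # +1 slot for the no-op expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.router(x).softmax(-1)                    # (B, T, n_experts + 1)
        out = x.clone()                                    # residual stream
        for i, expert in enumerate(self.experts):
            out = out + w[..., i:i + 1] * expert(x)
        # The last router slot is the no-op expert: it contributes nothing,
        # so tokens routed there keep only their residual value.
        return out
```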

The team observed that implementing MoDE in the integrated manner is clearly better than designs that simply reduce expert capacity and rely on dropping tokens to implement residual routing.
