Papers that were highly praised by the reviewers during the ICLR blind review stage: Will it be a major innovation in the Transformer architecture?

Despite many notable achievements, practical progress in training deep neural networks (DNNs) has been largely independent of theory. Most successful modern DNNs rely on specific arrangements of residual connections and normalization layers, but general principles for how to use these components in new architectures are still unknown, and their role in existing architectures is still not fully understood.

Residual architectures are the most popular and successful of these. They were originally developed in the context of convolutional neural networks (CNNs) and later became ubiquitous in attention-based networks, namely the Transformer architecture. One reason for the success of residual architectures is their better signal propagation compared to plain DNNs, where signal propagation refers to the transmission of geometric information through the layers of a DNN, as represented by a kernel function.

Recently, using signal propagation principles to train deep DNNs without the residual connections and/or normalization layers of residual architectures has become an area of community interest. The reasons are twofold: first, it would validate the signal propagation hypothesis for the effectiveness of residual architectures, clarifying our understanding of DNN interpretability; second, it may yield general principles and methods for DNN trainability beyond the residual paradigm.


For CNNs, Xiao et al. (2018) showed that improved signal propagation through better initialization makes it possible to train plain deep networks, albeit significantly more slowly than residual networks. Martens et al. (2021) proposed Deep Kernel Shaping (DKS), which controls signal propagation through activation-function transformations and, combined with strong second-order optimizers such as K-FAC, trains plain networks on ImageNet at the same speed as residual networks. Zhang et al. (2022) extended DKS to a larger class of activation functions and achieved near parity in generalization as well.

The key quantity to analyze in signal propagation is the DNN's kernel at initialization, or more precisely, its approximation in the infinite-width limit. For multilayer perceptrons (MLPs) and for CNNs with delta initialization, this kernel can be written as a simple layer-wise recursion involving only 2D functions, which makes analysis straightforward. The evolution of the kernel across the layers of a transformer is more complicated, so existing methods such as DKS are not suitable for transformers, or indeed for any architecture containing self-attention layers.

In an MLP, signal propagation is judged by the behavior of a (one-dimensional) kernel, whereas in a transformer it is judged by how the (high-dimensional) kernel matrix evolves through the layers of the network.

In particular, one must avoid situations where the diagonal entries of the kernel matrix grow or shrink rapidly with depth, which corresponds to uncontrolled activation norms and can lead to saturated losses or numerical problems. Avoiding rank collapse is necessary for deep transformers to be trainable, but whether deep residual-free transformers can actually be trained has remained an open question.
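To make this concrete, here is a minimal NumPy sketch (an illustration written for this article, not the paper's code) that stacks randomly initialized attention-only blocks and tracks the kernel matrix Σ_l = X_l X_l^T: printing its mean diagonal entry and an entropy-based effective rank across depth shows the kind of degeneration discussed above.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def effective_rank(sigma, eps=1e-12):
    """Entropy-based effective rank of a PSD kernel matrix."""
    eig = np.clip(np.linalg.eigvalsh(sigma), eps, None)
    p = eig / eig.sum()
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
T, d, depth = 32, 64, 30          # sequence length, width, number of blocks

X = rng.standard_normal((T, d))   # toy token representations at the input
for layer in range(1, depth + 1):
    # Randomly initialized query/key/value weights, as in a vanilla transformer.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))   # attention matrix A_l
    X = A @ X @ Wv                                     # attention-only block, no skip
    Sigma = X @ X.T                                    # kernel matrix Sigma_l
    if layer == 1 or layer % 10 == 0:
        print(f"layer {layer:2d}: mean diagonal = {Sigma.diagonal().mean():.3e}, "
              f"effective rank = {effective_rank(Sigma):.2f}")
```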

This paper, under blind review at ICLR 2023, addresses that question and demonstrates for the first time that deep transformers can be trained successfully without residual connections or normalization layers. To this end, the authors study signal propagation and rank collapse in deep residual-free transformers and derive three approaches to prevent them. The approaches combine parameter initialization, bias matrices, and location-dependent rescaling, and the analysis highlights several complexities specific to signal propagation in transformers, including interactions with positional encoding and causal masking. The researchers demonstrate empirically that their methods yield deep, trainable, residual-free transformers.

In experiments on the WikiText-103 and C4 datasets, the researchers show that their main method, Exponential Signal Preserving Attention (E-SPA), allows a residual-free transformer to match the training loss of a standard residual transformer by extending training roughly five times. In addition, by combining this method with residual connections, they show that transformers without normalization layers can match the training speed of standard transformers.


Paper address: https://openreview.net/pdf?id=NPrsUQgMjKK

Commenting on the paper, Rohan Anil, chief engineer at Google AI, called it a big step forward for the Transformer architecture and a fundamental improvement.


Constructing a deep Transformer that is trainable without shortcuts

So far, the only strategy for correcting rank collapse in transformers has relied on residual connections, which sidestep the inherent trainability problems of the self-attention layer rather than addressing them. In contrast, this study tackles the question directly: it first develops a better understanding of signal propagation through attention layers, and then modifies them based on these insights so that faithful signal propagation is achieved in deep transformers, which can then be trained with or without residual connections.

Specifically, the study first considers a simplified setting: a deep, attention-only vanilla transformer, either with a single head (h = 1) or with a multi-head setup in which the attention matrix A is shared across heads. If block l ≤ L has attention matrix A_l at initialization, then the representation after the final block, X_L, is:

X_L = (A_L A_{L-1} ⋯ A_1) X_0 (W_1 W_2 ⋯ W_L), where W_l denotes the combined value and output projection weights of block l.

For the above formula, if the value weights W_l^V and the output projection weights W_l^O are orthogonally initialized, then each W_l (and hence the product W_1 ⋯ W_L) is orthogonal at initialization.

Under the above assumptions, if Σ_0 = X_0 X_0^T denotes the cross-position input kernel matrix, then after some simplification the following formula is obtained:

Σ_L = X_L X_L^T = (A_L A_{L-1} ⋯ A_1) Σ_0 (A_L A_{L-1} ⋯ A_1)^T
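This identity can be checked numerically. The following sketch (written for this article, not the paper's code) builds a few attention-only blocks with orthogonally initialized weights and arbitrary nonnegative, lower-triangular, row-stochastic attention matrices, and confirms that the weight matrices drop out of the kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d, depth = 8, 16, 5            # sequence length, width, number of blocks

def random_orthogonal(n, rng):
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def random_causal_attention(T, rng):
    # Any nonnegative, row-stochastic, lower-triangular matrix will do here.
    logits = rng.standard_normal((T, T))
    logits[np.triu_indices(T, k=1)] = -np.inf         # causal mask
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

X0 = rng.standard_normal((T, d))
Sigma0 = X0 @ X0.T                # cross-position input kernel matrix

X, A_prod = X0, np.eye(T)
for _ in range(depth):
    A = random_causal_attention(T, rng)
    W = random_orthogonal(d, rng)  # orthogonally initialized value/output weights
    X = A @ X @ W                  # attention-only block
    A_prod = A @ A_prod            # running product A_L ... A_1

Sigma_L = X @ X.T
print(np.allclose(Sigma_L, A_prod @ Sigma0 @ A_prod.T))   # True: the W_l drop out
```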

From this simplified formula for the kernel matrix of a deep attention-only transformer, three requirements on (A_l)_l can be identified:

  1. Σ_l must be well-behaved in each block, avoiding degenerate situations such as rank collapse and exploding/vanishing diagonal values;
  2. A_l must be element-wise nonnegative ∀l;
  3. A_l should be lower triangular ∀l, to be compatible with causally masked attention.

Sections 3.1 and 3.2 of the paper focus on finding attention matrices that satisfy these requirements, proposing three methods: E-SPA, U-SPA, and Value-Skipinit. Each controls the transformer's attention matrices so that faithful signal propagation is maintained even at great depth. Section 3.3 then shows how softmax attention can be modified to realize these attention matrices.
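The paper's exact constructions are given in those sections. Purely as an illustration of the general recipe (a fixed bias matrix added to the softmax logits, combined with a particular initialization, pins the attention matrix at initialization to a chosen nonnegative, lower-triangular target), here is a hypothetical sketch. The exponential-decay bias and the zero query initialization are illustrative assumptions made for this example, not necessarily the paper's E-SPA parameterization.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def decay_bias(T, gamma=0.5):
    """Causal bias matrix with B[i, j] = -gamma * (i - j) for j <= i, -inf above the diagonal."""
    i = np.arange(T)[:, None]
    j = np.arange(T)[None, :]
    return np.where(j <= i, -gamma * (i - j).astype(float), -np.inf)

def biased_attention(X, Wq, Wk, B):
    d_k = Wq.shape[1]
    logits = (X @ Wq) @ (X @ Wk).T / np.sqrt(d_k)
    return softmax(logits + B)

T, d = 6, 16
rng = np.random.default_rng(2)
X = rng.standard_normal((T, d))

# With the query weights initialized to zero, the content logits vanish and the
# attention matrix at initialization is exactly softmax(B): nonnegative, lower
# triangular, with exponentially decaying weight on earlier positions, instead
# of the near-uniform averaging that drives rank collapse.
Wq = np.zeros((d, d))
Wk = rng.standard_normal((d, d)) / np.sqrt(d)
A_init = biased_attention(X, Wq, Wk, decay_bias(T))
print(np.round(A_init, 3))
```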

The study validates the two proposed SPA schemes, U-SPA and E-SPA, showing that both successfully avoid rank collapse in attention-only vanilla transformers even when the network is deep.


Experiments

WikiText-103 baseline: First, the study verifies that a standard deep transformer without residual connections is untrainable even with normalization layers (LN) and transformed activations, and that the proposed methods solve this problem. As shown in Figure 2, removing the residual connections from the standard transformer makes it untrainable, with the training loss plateauing at around 7.5; as shown in Figure 1, the residual-free standard transformer suffers from rank collapse.


Among the proposed approaches, E-SPA outperforms U-SPA and Value-Skipinit. However, the default transformer with residuals and LN still retains a training-speed advantage over the residual-free methods.

Table 1 evaluates the impact of different activation functions in the MLP block under the proposed method, as well as the effect of using LN in residual-free transformers. At depth 36, the method achieves good training performance with a range of activations: DKS-transformed GeLU, TAT-transformed Leaky ReLU, and untransformed GeLU, but not untransformed sigmoid. The experiments also show that layer normalization is relatively unimportant for training speed and can even be harmful when combined with transformed activations under SPA, which already has built-in mechanisms for controlling activation norms.


Figure 3 shows that one way to match the training loss of the default transformer without requiring additional iterations is to use normalized residual connections.
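The paper's exact weighting scheme is specified in the text. Purely as a general illustration of what a normalized residual connection does, the sketch below uses one common form in which the skip and residual branches are weighted so their squared coefficients sum to one, keeping the output variance comparable to the input instead of growing with depth; the coefficient value here is an illustrative assumption.

```python
import numpy as np

def normalized_residual(x, branch, alpha=0.9):
    """Weight the skip and residual branches so that alpha^2 + beta^2 = 1."""
    beta = np.sqrt(1.0 - alpha ** 2)
    return alpha * x + beta * branch(x)

rng = np.random.default_rng(3)
x = rng.standard_normal((128, 64))
branch = lambda h: rng.standard_normal(h.shape)   # stand-in for an attention/MLP block

print(np.var(normalized_residual(x, branch)))     # stays close to 1.0
print(np.var(x + branch(x)))                      # plain residual sum: close to 2.0
```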


Table 2 shows that E-SPA with normalized residuals and LN outperforms the default PreLN transformer.


Figure 4(a) shows that E-SPA again outperforms the other methods, and Figure 4(b) shows that the remaining training-loss gap to the default transformer can be closed simply by training for longer.

