



- Paper address: https://arxiv.org/pdf/2405.13956
- Paper title: Attention as an RNN
Specifically, the researchers first examined the attention mechanism in the Transformer, the component responsible for the quadratic growth of the Transformer's computational complexity. The study shows that the attention mechanism can be viewed as a special type of recurrent neural network (RNN), one with the ability to efficiently compute many-to-one RNN outputs. Using this RNN formulation of attention, the study demonstrates that popular attention-based models such as the Transformer and the Perceiver can be considered RNN variants.
However, unlike traditional RNNs such as LSTMs and GRUs, these attention-based models cannot be updated efficiently with new tokens.
To solve this problem, the study introduces a new attention formulation based on the parallel prefix scan algorithm. This formulation efficiently computes attention's many-to-many RNN output, thereby enabling efficient updates.
Based on this new attention formulation, the study proposes Aaren ([A]ttention [a]s a [re]current neural [n]etwork), a computationally efficient module that can not only be trained in parallel like a Transformer, but can also be updated as efficiently as an RNN.
Experimental results show that Aaren performs comparably to the Transformer on 38 datasets covering four common sequential data settings (reinforcement learning, event prediction, time series classification, and time series forecasting), while being more efficient in both time and memory.
To solve the above problems, the authors propose an attention-based module that is highly efficient: it takes advantage of GPU parallelism while also supporting efficient updates.
First, the authors show in Section 3.1 that attention can be viewed as a type of RNN with the special ability to efficiently compute the output of a many-to-one RNN (Figure 1a). Leveraging this RNN form of attention, the authors further show that popular attention-based models, such as the Transformer (Figure 1b) and the Perceiver (Figure 1c), can be considered RNNs. However, unlike traditional RNNs, these models cannot efficiently update themselves with new tokens, limiting their potential in sequential problems where data arrives as a stream.

To solve this problem, the authors introduce in Section 3.2 an efficient method, based on the parallel prefix scan algorithm, for computing attention as a many-to-many RNN. Building on this, they introduce Aaren in Section 3.3: a computationally efficient module that can not only be trained in parallel (just like a Transformer), but can also be efficiently updated with new tokens at inference time, requiring only constant memory (just like a traditional RNN).


Attention can be viewed as an RNN as follows. Given a query $q$ and $N$ context tokens $x_{1:N}$ with keys $k_{1:N}$ and values $v_{1:N}$, the attention output is

$$\text{Attention}(q, k_{1:N}, v_{1:N}) = \frac{\sum_{i=1}^{N} \exp(s_i)\, v_i}{\sum_{i=1}^{N} \exp(s_i)}, \qquad s_i = \operatorname{dot}(q, k_i).$$

The numerator is $\hat{a}_N = \sum_{i=1}^{N} \exp(s_i)\, v_i$ and the denominator is $\hat{c}_N = \sum_{i=1}^{N} \exp(s_i)$. Both are rolling sums that can be computed iteratively:

$$\hat{a}_k = \hat{a}_{k-1} + \exp(s_k)\, v_k, \qquad \hat{c}_k = \hat{c}_{k-1} + \exp(s_k),$$

so attention is a many-to-one RNN with hidden state $(\hat{a}_k, \hat{c}_k)$ and output $o_k = \hat{a}_k / \hat{c}_k$. For numerical stability, the authors instead track the cumulative maximum $m_k = \max_{i \in \{1, \dots, k\}} s_i$ together with the rescaled quantities $a_k = \sum_{i=1}^{k} \exp(s_i - m_k)\, v_i$ and $c_k = \sum_{i=1}^{k} \exp(s_i - m_k)$, which satisfy the recurrences

$$m_k = \max(m_{k-1}, s_k), \quad a_k = a_{k-1}\exp(m_{k-1} - m_k) + v_k \exp(s_k - m_k), \quad c_k = c_{k-1}\exp(m_{k-1} - m_k) + \exp(s_k - m_k).$$

Computing attention as a many-to-many RNN means computing all prefix outputs $\{\text{Attention}(q, x_{1:k})\}_{k=1}^{N}$ at once. To this end, the authors utilize the parallel prefix scan algorithm (see Algorithm 1), a parallel computing method that computes the $N$ prefix results $\{\bigoplus_{i=1}^{k} b_i\}_{k=1}^{N}$ from $N$ sequential data points $\{b_i\}_{i=1}^{N}$ via an associative operator $\oplus$.

To efficiently compute $\{\text{Attention}(q, x_{1:k})\}_{k=1}^{N}$, one can compute $\{a_k\}_{k=1}^{N}$, $\{c_k\}_{k=1}^{N}$, and $\{m_k\}_{k=1}^{N}$ through the parallel scan algorithm, and then combine $a_k$ and $c_k$ to obtain $\text{Attention}(q, x_{1:k}) = a_k / c_k$. Concretely, for an index set $A$, define $m_A = \max_{i \in A} s_i$, $u_A = \sum_{i \in A} \exp(s_i - m_A)$, and $w_A = \sum_{i \in A} \exp(s_i - m_A)\, v_i$. The input to the parallel scan algorithm is the set of singleton triples $\{(m_{\{i\}}, u_{\{i\}}, w_{\{i\}})\}_{i=1}^{N} = \{(s_i, 1, v_i)\}_{i=1}^{N}$. The algorithm recursively applies the operator $\oplus$, which works as follows:

$$(m_A, u_A, w_A) \oplus (m_B, u_B, w_B) = (m_{A \cup B},\; u_{A \cup B},\; w_{A \cup B}),$$

where $m_{A \cup B} = \max(m_A, m_B)$, $u_{A \cup B} = u_A \exp(m_A - m_{A \cup B}) + u_B \exp(m_B - m_{A \cup B})$, and $w_{A \cup B} = w_A \exp(m_A - m_{A \cup B}) + w_B \exp(m_B - m_{A \cup B})$.

The scan outputs the prefix triples $\{(m_{\{1,\dots,k\}}, u_{\{1,\dots,k\}}, w_{\{1,\dots,k\}})\}_{k=1}^{N}$, also known as $\{(m_k, c_k, a_k)\}_{k=1}^{N}$. Combining the last two values of each output tuple, $\text{Attention}(q, x_{1:k}) = a_k / c_k$ is retrieved, resulting in an efficient parallel method of computing attention as a many-to-many RNN (Figure 3).
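To make the construction concrete, here is a minimal NumPy sketch of the operator ⊕ and a scan over the (m, u, w) triples, checked against naive softmax attention. This is an illustration, not the paper's code; the function names and shapes are assumptions, and the scan is written sequentially for clarity, even though the associativity of ⊕ is what allows it to run in O(log N) parallel steps on a GPU.

```python
import numpy as np

def combine(left, right):
    """Associative operator ⊕ on (m, u, w) triples, numerically stable."""
    m_a, u_a, w_a = left
    m_b, u_b, w_b = right
    m = np.maximum(m_a, m_b)
    u = u_a * np.exp(m_a - m) + u_b * np.exp(m_b - m)
    w = w_a * np.exp(m_a - m) + w_b * np.exp(m_b - m)
    return m, u, w

def attention_prefix_outputs(q, K, V):
    """Compute Attention(q, x_{1:k}) for every prefix k = 1..N.

    Written as a left-to-right fold for clarity; a parallel prefix scan
    computes the same N prefix triples in O(log N) parallel steps.
    """
    s = K @ q                          # s_i = dot(q, k_i), shape (N,)
    state = (s[0], 1.0, V[0].copy())   # leaf triple (s_1, 1, v_1)
    outputs = [state[2] / state[1]]
    for i in range(1, len(s)):
        state = combine(state, (s[i], 1.0, V[i]))  # fold in leaf (s_i, 1, v_i)
        m_k, c_k, a_k = state
        outputs.append(a_k / c_k)      # o_k = a_k / c_k
    return np.stack(outputs)

# Sanity check against naive softmax attention on each prefix.
rng = np.random.default_rng(0)
N, d = 6, 4
q = rng.normal(size=d)
K, V = rng.normal(size=(N, d)), rng.normal(size=(N, d))
outs = attention_prefix_outputs(q, K, V)
for k in range(1, N + 1):
    s = K[:k] @ q
    p = np.exp(s - s.max())
    p /= p.sum()
    assert np.allclose(outs[k - 1], p @ V[:k])
```

Because u and w are rescaled by the same factor whenever the running maximum changes, the ratio w/u always equals the exact softmax-weighted average over the prefix.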
Unlike a Transformer, where the query is one of the tokens input to attention, in Aaren the query token q is learned through backpropagation during training.
The figure below shows an example of a stacked Aaren model whose input context tokens are x_1:3 and whose outputs are y_1:3. Notably, since Aaren uses the attention mechanism in its RNN form, stacking Aarens amounts to stacking RNNs. Consequently, Aarens can also be updated efficiently with new tokens: the iterative computation of y_k requires only constant computation, since it depends only on h_{k-1} and x_k.
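As a rough sketch of that constant-memory behavior (my illustration, not the paper's implementation): each Aaren-style layer only needs to carry its (m, c, a) triple forward, so folding in a new token touches O(1) state per layer no matter how many tokens came before. The learned query is shown as a plain array standing in for a parameter trained by backpropagation, and projections and feed-forward sublayers are omitted.

```python
import numpy as np

class AarenLayerSketch:
    """One attention-as-RNN layer with a learned query and O(1) state."""

    def __init__(self, d, rng):
        self.q = rng.normal(size=d)  # stands in for a query learned by backprop
        self.m = -np.inf             # running max of scores s_i
        self.c = 0.0                 # running rescaled denominator
        self.a = np.zeros(d)         # running rescaled numerator

    def step(self, k_i, v_i):
        """Fold one new token into the state; return Attention(q, x_{1:k})."""
        s = self.q @ k_i
        m_new = max(self.m, s)
        scale = np.exp(self.m - m_new)          # rescale the old state
        self.a = self.a * scale + v_i * np.exp(s - m_new)
        self.c = self.c * scale + np.exp(s - m_new)
        self.m = m_new
        return self.a / self.c

# Stream tokens through a two-layer stack: each layer's output feeds the
# next, so y_k depends only on the per-layer states and the new token x_k.
rng = np.random.default_rng(0)
d = 4
stack = [AarenLayerSketch(d, rng) for _ in range(2)]
for x in rng.normal(size=(5, d)):   # stream of tokens x_1..x_5
    h = x
    for layer in stack:
        h = layer.step(h, h)        # keys = values = h here, for simplicity
    y_k = h                         # memory stays constant as k grows
```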
Transformer-based models require memory linear in the sequence length (when using a KV cache) and must store all previous tokens, including those in intermediate Transformer layers. Aaren-based models, in contrast, require only constant memory and do not need to store all previous tokens, which makes Aaren significantly more computationally efficient than the Transformer.
The goal of the experiments is to compare Aaren and the Transformer in terms of both performance and the resources (time and memory) required. For a comprehensive comparison, the authors evaluated on four problems: reinforcement learning, event prediction, time series forecasting, and time series classification.
The authors first compared the performance of Aaren and the Transformer in reinforcement learning, which is popular in interactive settings such as robotics, recommendation engines, and traffic control.
The results in Table 1 show that Aaren performs comparably to the Transformer across all 12 datasets and 4 environments. However, unlike the Transformer, Aaren is also an RNN and can therefore process new environment interactions efficiently as they arrive, making it better suited to reinforcement learning.
Next, the authors compared the performance of Aaren and the Transformer in event prediction, which is common in many real-world settings such as finance (e.g., transactions), healthcare (e.g., patient observations), and e-commerce (e.g., purchases).
The results in Table 2 show that Aaren performs comparably to the Transformer on all datasets. Aaren's ability to efficiently process new inputs is particularly useful in event prediction settings, where events arrive in irregular streams.