


Optimizing Transformer models for long code sequences: improving effectiveness and performance in long-code scenarios
Alibaba Cloud's Machine Learning Platform PAI, in collaboration with Professor Ming Gao's team at East China Normal University, published the structure-aware sparse attention Transformer model SASA at SIGIR 2022. SASA is an optimization method for Transformer models on long code sequences, aimed at improving both effectiveness and performance in long-code scenarios. Because the complexity of the self-attention module grows quadratically with sequence length, most programming-based pre-trained language models (PPLMs) simply truncate code sequences. SASA instead sparsifies the self-attention computation and incorporates the structural characteristics of code, improving performance on long-sequence tasks while reducing memory usage and computational complexity.
Paper: Tingting Liu, Chengyu Wang, Cen Chen, Ming Gao, and Aoying Zhou. Understanding Long Programming Languages with Structure-Aware Sparse Attention. SIGIR 2022
Model Framework
The following figure shows the overall framework of SASA:
SASA consists of two stages: a preprocessing stage and a Sparse Transformer training stage. In the preprocessing stage, two token-interaction matrices are obtained: a top-k frequency matrix and an AST pattern matrix. The top-k frequency matrix uses a code pre-trained language model to learn the attention-interaction frequency between tokens on the CodeSearchNet corpus. The AST pattern matrix is obtained by parsing the code into an Abstract Syntax Tree (AST) and deriving token-to-token interaction information from the connectivity of the tree. The Sparse Transformer training stage uses the Transformer encoder as the basic framework, replaces full self-attention with structure-aware sparse self-attention, and computes attention only between token pairs that conform to specific patterns, thereby reducing computational complexity.
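As a concrete illustration of the AST pattern matrix, the sketch below builds a node-level interaction matrix from Python code using the standard `ast` module. This is a minimal, hypothetical example of the idea only: the paper's preprocessing works at the token level on the CodeSearchNet corpus, and `ast_pattern_matrix` is not the authors' code.

```python
import ast

import numpy as np

def ast_pattern_matrix(code: str) -> np.ndarray:
    """Build a node-level interaction matrix from the code's AST:
    two positions may attend to each other if their AST nodes are
    directly connected (parent-child) in the syntax tree."""
    tree = ast.parse(code)
    nodes = list(ast.walk(tree))
    index = {id(node): i for i, node in enumerate(nodes)}
    mat = np.eye(len(nodes), dtype=bool)  # every node attends to itself
    for parent in nodes:
        for child in ast.iter_child_nodes(parent):
            i, j = index[id(parent)], index[id(child)]
            mat[i, j] = mat[j, i] = True  # symmetric interaction
    return mat

mat = ast_pattern_matrix("def add(a, b):\n    return a + b")
```

The resulting boolean matrix plays the role of the AST pattern matrix: it marks which position pairs are allowed to interact based on syntax-tree connectivity.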
SASA sparse attention includes the following four modules:
- Sliding window attention: computes self-attention only between tokens within a sliding window, preserving local context. The computational complexity is O(n×w), where n is the sequence length and w is the sliding window size.
- Global attention: designates certain global tokens that perform attention computation with all tokens in the sequence, capturing global information. The computational complexity is O(n×g), where g is the number of global tokens.
- Top-k sparse attention: attention interactions in Transformer models are sparse and long-tailed, so each token computes attention only with the top-k tokens it interacts with most strongly. The complexity is O(n×k).
- AST-aware structure attention: unlike natural language sequences, code has strong structural characteristics. The code is parsed into an Abstract Syntax Tree (AST), and the scope of the attention computation is determined by the connectivity of the tree.
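The four patterns above can be combined into a single boolean attention mask. The sketch below is illustrative, not the authors' implementation; it assumes the first g positions act as global tokens, `scores` is the precomputed token-interaction frequency matrix, and `ast_mat` is the AST pattern matrix.

```python
import numpy as np

def sasa_mask(n, w, g, k, scores, ast_mat):
    """Combine the four sparse patterns into one boolean attention mask.
    n: sequence length; w: window radius; g: number of global tokens
    (assumed to be the first g positions); scores: (n, n) interaction
    frequencies; ast_mat: (n, n) boolean AST pattern matrix."""
    mask = np.zeros((n, n), dtype=bool)
    idx = np.arange(n)
    # sliding window: positions within distance w attend to each other
    mask |= np.abs(idx[:, None] - idx[None, :]) <= w
    # global tokens attend everywhere and are attended by everyone
    mask[:g, :] = True
    mask[:, :g] = True
    # top-k: each token attends to its k highest-frequency partners
    topk = np.argsort(-scores, axis=1)[:, :k]
    mask[idx[:, None], topk] = True
    # AST-aware structure attention
    mask |= ast_mat
    return mask
```

A position pair is attended only if at least one of the four patterns allows it, which is what makes the overall computation sparse.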
To suit the parallel-computing characteristics of modern hardware, we divide the sequence into blocks rather than computing in units of individual tokens. Each query block computes attention with w sliding-window blocks, g global blocks, and k top-k/AST blocks, so the overall computational complexity is O(n(w+g+k)b), where b is the block size.
Each sparse attention pattern corresponds to an attention matrix. Taking sliding window attention as an example, the attention matrix is computed as:
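The equation figure is not reproduced here; in standard masked-softmax notation, sliding window attention can be written as follows (a reconstruction, not necessarily the paper's exact formula):

```latex
A_{ij} =
\begin{cases}
\operatorname{softmax}\!\left(\dfrac{Q_i K_j^{\top}}{\sqrt{d}}\right), & |i-j| \le w,\\[4pt]
0, & \text{otherwise},
\end{cases}
```

where $Q_i$ and $K_j$ are the query and key vectors at positions $i$ and $j$, $d$ is the head dimension, and the softmax in row $i$ is taken over the unmasked positions only.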
SASA pseudocode:
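The pseudocode figure is omitted here; the following NumPy sketch shows the blockwise computation described above. It is an illustrative reconstruction under the stated block conventions, not the authors' released code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sasa_attention(q, k, v, block_mask, b):
    """Blockwise sparse attention sketch.
    q, k, v: (n, d) arrays; block_mask: (n//b, n//b) boolean array marking
    which query/key block pairs interact; b: block size. Assumes every
    query block attends to at least one key block (the sliding window
    pattern guarantees this)."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n // b):
        qi = q[i * b:(i + 1) * b]             # one query block
        cols = np.flatnonzero(block_mask[i])  # key blocks it attends to
        rows = np.concatenate([np.arange(j * b, (j + 1) * b) for j in cols])
        attn = softmax(qi @ k[rows].T / np.sqrt(d))
        out[i * b:(i + 1) * b] = attn @ v[rows]
    return out
```

With a fully-true `block_mask` this reduces to dense self-attention; sparsity comes from zeroing out block pairs that no pattern selects, so only the selected key/value blocks are ever gathered and multiplied.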
Experimental results
We evaluate on four task datasets provided by CodeXGLUE[1]: code clone detection, defect detection, code search, and code summarization. We extract the samples whose sequence length exceeds 512 to form a long-sequence dataset. The experimental results are as follows:
The experimental results show that SASA achieves the best performance on the three datasets, significantly exceeding all baselines. Among the baselines, RoBERTa-base[2], CodeBERT[3], and GraphCodeBERT[4] handle long sequences by truncation, which loses part of the context information. Longformer[5] and BigBird[6] are methods for processing long sequences in natural language, but they do not account for the structural characteristics of code, so transferring them directly to code tasks is ineffective.
To verify the effect of the top-k sparse attention and AST-aware sparse attention modules, we conducted ablation experiments on the BigCloneBench and Defect Detection datasets. The results are as follows:
The sparse attention module not only improves performance on long-code tasks but also greatly reduces GPU memory usage. On the same device, SASA can use a larger batch size, whereas the full self-attention model runs out of memory. The specific memory usage is as follows:
As a sparse attention module, SASA can be migrated to other Transformer-based pre-trained models to handle long-sequence natural language processing tasks. It will be integrated into the open-source framework EasyNLP (https://github.com/alibaba/EasyNLP) and contributed to the open-source community.
Paper link:
https://arxiv.org/abs/2205.13730