
Built from scratch, DeepMind’s new paper explains Transformer in detail with pseudocode

王林 | 2023-04-09 20:31:09

The Transformer was introduced by Google in the 2017 paper "Attention Is All You Need". The paper dispensed with the CNNs and RNNs used in earlier deep learning work and overturned the then-common assumption that sequence modeling means recurrence. The architecture is now used throughout NLP; popular models such as GPT and BERT are all built on the Transformer.

Since its introduction, researchers have proposed many Transformer variants. Yet nearly every description of the architecture is verbal or graphical; pseudocode descriptions of the Transformer are scarce.

The following anecdote illustrates the point: a well-known AI researcher once sent a paper he considered very well written to a famous complexity theorist. The theorist's reply was: I can't find any theorem in the paper; I don't know what the paper is about.

A paper may be detailed enough for a practitioner, but a theorist usually demands greater precision. For some reason, the DL community seems reluctant to provide pseudocode for its neural network models.

Currently it appears that the DL community has the following problems:

DL publications lack scientific precision and detail. Deep learning has achieved enormous success over the past 5 to 10 years, with thousands of papers published every year, yet many researchers describe only informally how they modified previous models; even papers of over 100 pages may devote just a few lines to an informal model description. At best there are some high-level diagrams, with no pseudocode, no equations, and no precise statement of what the model is. No one even provides pseudocode for the famous Transformer and its encoder/decoder variants.

Source code versus pseudocode. Open-source code is very useful, but in contrast to thousands of lines of real source code, well-designed pseudocode usually fits on a single page while remaining essentially complete. Writing it appears to be hard work that no one wants to do.

Explaining the training process is equally important, but papers sometimes do not even state what the model's inputs and outputs are, or what its potential side effects are. The experimental sections of papers often fail to explain what is fed into the algorithm and how. When the methods section does offer some explanation, it is frequently disconnected from what the experimental section describes, probably because different authors wrote different sections.

Some people may ask: Is pseudocode really needed? What is the use of pseudocode?

Researchers from DeepMind believe that providing pseudocode has many uses: compared with reading a paper or scrolling through a thousand lines of actual code, pseudocode condenses all the important content onto a single page, making it easier to develop new variants. To this end, they recently published the paper "Formal Algorithms for Transformers", which describes the Transformer architecture in a complete and mathematically precise way.

Introduction to the paper

The paper covers what the Transformer is, how it is trained, what it is used for, its key architectural components, and a preview of the more famous models built on it.


Paper address: https://arxiv.org/pdf/2207.09238.pdf

Reading the paper does require familiarity with basic ML terminology and simple neural network architectures such as MLPs. Once readers have absorbed its content, they will have a solid grasp of the Transformer and may use the pseudocode to implement their own Transformer variants.

The main part of the paper is Sections 3-8, which introduce the Transformer and its typical tasks, tokenization, the components of the Transformer architecture, Transformer training and inference, and practical applications. A minimal tokenization sketch is given below.
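To give a flavour of the tokenization step the paper formalizes, here is a minimal character-level tokenizer sketch in Python. The toy corpus, vocabulary, and special token names below are illustrative assumptions, not taken from the paper, which discusses character-, word- and subword-level schemes without prescribing one.

```python
# Minimal character-level tokenizer sketch (illustrative; not the paper's algorithm).
text = "attention is all you need"              # toy corpus (assumption)
vocab = sorted(set(text)) + ["<bos>", "<eos>"]   # characters plus special tokens
stoi = {tok: i for i, tok in enumerate(vocab)}   # token -> ID lookup table

def tokenize(s: str) -> list[int]:
    # Map a string to a sequence of token IDs, bracketed by sequence markers.
    return [stoi["<bos>"]] + [stoi[ch] for ch in s] + [stoi["<eos>"]]

print(tokenize("all you need"))  # prints a list of integer token IDs
```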


The essentially complete pseudocode in the paper is about 50 lines long, whereas real source code runs to thousands of lines. The pseudocode describing the algorithms is aimed at theoretical researchers who need compact, complete and precise formulations, at experimental researchers who want to implement a Transformer from scratch, and it is also useful as a formal reference for the Transformer in papers or textbooks.
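As an informal illustration of the kind of algorithm the paper writes out, below is a minimal NumPy sketch of single-head scaled dot-product attention. The function and variable names are my own, and the conventions (row vectors, no output projection) differ from the paper's notation; treat this as a sketch rather than a transcription of the paper's pseudocode.

```python
import numpy as np

def attention(X, Z, Wq, Wk, Wv, mask=None):
    """Single-head scaled dot-product attention (sketch, not the paper's pseudocode).

    X:  (n_x, d_x)  query-side token representations
    Z:  (n_z, d_z)  key/value-side token representations (Z = X for self-attention)
    Wq: (d_x, d_attn), Wk: (d_z, d_attn), Wv: (d_z, d_out) projection matrices
    mask: optional (n_x, n_z) boolean array; False entries cannot be attended to
    """
    Q = X @ Wq                                      # queries, (n_x, d_attn)
    K = Z @ Wk                                      # keys,    (n_z, d_attn)
    V = Z @ Wv                                      # values,  (n_z, d_out)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])         # similarity scores, (n_x, n_z)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)       # masked positions get ~zero weight
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # attention output, (n_x, d_out)
```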

[Figure: pseudocode examples from the paper]

For readers who know basic ML terminology and simple neural network architectures such as MLPs, the paper provides a solid Transformer foundation and pseudocode templates for implementing their own Transformer models; a simplified example of such a template is sketched below.
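Continuing the sketch above, one could assemble a single pre-norm, decoder-style layer as follows. Again, this is a simplified example of my own (single head, ReLU MLP, no learned layer-norm parameters, no attention output projection), not the formal algorithm given in the paper.

```python
def layer_norm(x, eps=1e-5):
    # Normalize each row to zero mean and unit variance (learned scale/offset omitted).
    return (x - x.mean(axis=-1, keepdims=True)) / np.sqrt(x.var(axis=-1, keepdims=True) + eps)

def decoder_layer(X, p):
    # One simplified pre-norm decoder layer: causal self-attention + MLP, with residuals.
    n = X.shape[0]
    causal = np.tril(np.ones((n, n), dtype=bool))    # token t attends only to tokens <= t
    Xn = layer_norm(X)
    h = X + attention(Xn, Xn, p["Wq"], p["Wk"], p["Wv"], mask=causal)
    mlp = np.maximum(layer_norm(h) @ p["W1"], 0.0) @ p["W2"]   # two-layer ReLU MLP
    return h + mlp

# Toy usage: 5 tokens with 16-dimensional representations (shapes are assumptions).
rng = np.random.default_rng(0)
d, d_attn, d_ff, n = 16, 8, 32, 5
p = {"Wq": rng.normal(size=(d, d_attn)), "Wk": rng.normal(size=(d, d_attn)),
     "Wv": rng.normal(size=(d, d)),
     "W1": rng.normal(size=(d, d_ff)),   "W2": rng.normal(size=(d_ff, d))}
X = rng.normal(size=(n, d))
print(decoder_layer(X, p).shape)   # (5, 16)
```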

Introduction to the authors

The first author of the paper is Mary Phuong, a researcher who formally joined DeepMind in March of this year. She holds a PhD from the Institute of Science and Technology Austria and works mainly on theoretical research in machine learning.


The other author of the paper is Marcus Hutter, a senior researcher at DeepMind and an honorary professor at the Research School of Computer Science (RSCS) of the Australian National University (ANU).


Marcus Hutter has worked on the mathematical theory of artificial intelligence for many years. This line of research draws on several concepts from mathematics and computer science, including reinforcement learning, probability theory, algorithmic information theory, optimization, search, and computability theory. His book "Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability", published in 2005, is a highly technical and mathematical work.

In 2002, Marcus Hutter, together with Jürgen Schmidhuber and Shane Legg, put forward AIXI, a mathematical theory of artificial intelligence based on idealized agents and reward-driven reinforcement learning. In 2009, he proposed the theory of feature reinforcement learning.

