


Built from scratch: DeepMind's new paper explains the Transformer in detail with pseudocode
The Transformer was introduced by Google in the 2017 paper "Attention Is All You Need". The paper abandoned the CNNs and RNNs used in earlier deep learning work; this groundbreaking research overturned the then-common idea that sequence modeling meant RNNs, and the architecture is now ubiquitous in NLP. Popular models such as GPT and BERT are all built on the Transformer.
Since its introduction, researchers have proposed many Transformer variants. Yet nearly every description of the architecture has been verbal or graphical; pseudocode descriptions of the Transformer are remarkably scarce.
The situation recalls a well-known anecdote: a famous AI researcher once sent a noted complexity theorist a paper he considered very well written. The theorist's reply: "I can't find a single theorem in it; I have no idea what this paper is about."
A paper may be detailed enough for a practitioner, but a theorist usually requires greater precision. For whatever reason, the DL community seems reluctant to provide pseudocode for its neural network models.
Currently, the DL community appears to have the following problems:
DL publications lack scientific precision and detail. Deep learning has achieved enormous success over the past five to ten years, with thousands of papers published every year, yet many researchers describe only informally how they modified previous models. Papers of over 100 pages may contain just a few lines of informal model description: at best some high-level diagrams, with no pseudocode, no equations, and no precise statement of the model. Nobody had even provided pseudocode for the famous Transformer and its encoder/decoder variants.
Source code versus pseudocode. Open-source code is very useful, but where real source code runs to thousands of lines, well-designed pseudocode usually fits on less than a page while remaining essentially complete. Writing it seems to be hard work that no one wants to do.
Explaining the training process is equally important, yet papers sometimes do not even state what the model's inputs and outputs are, or what side effects training has. Experimental sections often fail to explain what is fed into the algorithm and how. When the methods section does offer explanations, they are frequently disconnected from the experimental section, probably because different authors wrote different parts.
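To make the point concrete, here is a minimal sketch of how a training step's inputs, outputs, and side effects could be stated explicitly. The function name, shapes, and the pure-functional plain-SGD style are our assumptions for illustration, not taken from the paper:

```python
import numpy as np

def train_step(params, token_ids, grad_fn, lr=1e-4):
    """One plain-SGD training step with its interface spelled out.

    Inputs:  params    - dict mapping parameter names to np.ndarray weights
             token_ids - np.ndarray of shape (batch, seq_len), integer tokens
             grad_fn   - callable (params, token_ids) -> dict of gradients
             lr        - learning rate
    Output:  a new params dict; nothing is mutated in place,
             so the step has no hidden side effects.
    """
    grads = grad_fn(params, token_ids)
    return {name: w - lr * grads[name] for name, w in params.items()}
```

Even a few lines like these answer the questions a paper often leaves open: what goes in, what comes out, and what state changes.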
Some may ask: is pseudocode really necessary? What is it good for?
Researchers from DeepMind believe that providing pseudocode has many uses. Compared with reading an article or scrolling through a thousand lines of actual code, pseudocode condenses all the important content onto a single page, making it easier to develop new variants. To this end, they recently published the paper "Formal Algorithms for Transformers", which describes the Transformer architecture in a complete and mathematically precise way.
Introduction to the paper
The paper covers what a Transformer is, how a Transformer is trained, what Transformers are used for, the key architectural components, and a preview of the most prominent models.
Paper address: https://arxiv.org/pdf/2207.09238.pdf
To read the paper, however, readers need to be familiar with basic ML terminology and simple neural network architectures such as MLPs. With its content in hand, they will have a solid grasp of the Transformer and may use the pseudocode to implement their own Transformer variants.
The main body of the paper is Sections 3-8, which introduce the Transformer and its typical tasks, tokenization, the components of the Transformer architecture, Transformer training and inference, and practical applications.
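To give a taste of the inference part, decoding with an autoregressive model boils down to the familiar generate-one-token-at-a-time loop. The following greedy-decoding sketch is ours; the `model` callable, names, and shapes are assumptions for illustration, not the paper's notation (the paper's listings also cover sampling with a temperature parameter):

```python
import numpy as np

def generate(model, prompt_ids, max_new_tokens, eos_id):
    """Greedy autoregressive decoding: feed the growing sequence back
    into the model and append the most likely next token each step."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = model(np.array(ids))        # (seq_len, vocab_size)
        next_id = int(logits[-1].argmax())   # most likely next token
        ids.append(next_id)
        if next_id == eos_id:                # stop at end-of-sequence
            break
    return ids
```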
The essentially complete pseudocode in the paper runs to about 50 lines, whereas real source code runs to thousands. It is suitable for theoretical researchers who need compact, complete, and precise formulations, for experimental researchers implementing a Transformer from scratch, and as a reference for papers or textbooks that build on the formal Transformer algorithms.
Pseudocode examples in the paper
Readers familiar with basic ML terminology and simple neural network architectures such as MLPs can use the paper's pseudocode as a template for implementing their own Transformer models.
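To give a flavor of the level of detail the paper aims for, here is a minimal Python/NumPy sketch of basic single-head attention in the same spirit. The variable names and shape conventions are ours for illustration; the paper's own listings go further, covering bias terms, multi-head attention, layer normalization, and the full encoder/decoder stacks:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Z, Wq, Wk, Wv, mask=None):
    """Basic single-head attention.

    X: (n, d_x) primary (query) token representations
    Z: (m, d_z) context (key/value) token representations
    Wq: (d_x, d_attn), Wk: (d_z, d_attn), Wv: (d_z, d_out)
    mask: optional boolean (n, m) array, True where attending is allowed
    """
    Q = X @ Wq                                 # queries, (n, d_attn)
    K = Z @ Wk                                 # keys,    (m, d_attn)
    V = Z @ Wv                                 # values,  (m, d_out)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])    # scaled dot products, (n, m)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block masked positions
    return softmax(scores, axis=-1) @ V        # (n, d_out)

# Tiny usage example with random weights:
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Z = rng.normal(size=(6, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 16)) for _ in range(3))
out = attention(X, Z, Wq, Wk, Wv)              # shape (4, 16)
```

Setting Z = X gives self-attention, and passing a lower-triangular mask gives the causal, decoder-style variant.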
Introduction to the authors
The first author of the paper is Mary Phuong, a researcher who formally joined DeepMind in March of this year. She received her PhD from the Institute of Science and Technology Austria (IST Austria), where she worked mainly on theoretical machine learning.
The other author is Marcus Hutter, a Senior Researcher at DeepMind and an Honorary Professor at the Research School of Computer Science (RSCS) of the Australian National University (ANU).
Marcus Hutter has worked on the mathematical theory of artificial intelligence for many years, research that draws on several concepts from mathematics and computer science, including reinforcement learning, probability theory, algorithmic information theory, optimization, search, and the theory of computation. His book Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, published in 2005, is a highly technical and mathematical work.
In 2002, Marcus Hutter, together with Jürgen Schmidhuber and Shane Legg, put forward AIXI, a mathematical theory of artificial intelligence based on idealized agents and reward-based reinforcement learning. In 2009 he proposed the theory of feature reinforcement learning.