
What is the Transformer machine learning model?


Translator | Li Rui

Reviewer | Sun Shujuan

In recent years, the Transformer machine learning model has become one of the main highlights of advances in deep learning and deep neural networks. It is mainly used for advanced applications in natural language processing. Google uses it to enhance its search engine results, and OpenAI used Transformers to create its famous GPT-2 and GPT-3 models.


Since its debut in 2017, the Transformer architecture has continued to evolve and expand into many different variants, extending from language tasks to other domains. Transformers have been used for time series forecasting. They are the key innovation behind AlphaFold, DeepMind's protein structure prediction model. OpenAI's source code generation model Codex is also based on the Transformer. Transformers have also recently entered the field of computer vision, where they are slowly replacing convolutional neural networks (CNNs) in many complex tasks.

Researchers are still exploring ways to improve Transformer and use it in new applications. Here’s a quick explanation of what makes Transformers exciting and how they work.

1. Using neural networks to process sequence data


Traditional feedforward neural networks map each input to an output and are not designed to track sequential data. They work well for tasks like image classification but fail on sequential data such as text. A machine learning model that processes text must not only handle each word, but also consider how the words are ordered and related to one another, since the meaning of a word may change depending on the words that appear before and after it in the sentence.

Before the advent of the Transformer, recurrent neural networks (RNNs) were the preferred solution for natural language processing. Given a sequence of words, an RNN processes the first word and feeds the result back into the layer that processes the next word. This enables it to track an entire sentence rather than processing each word individually.

The shortcomings of RNNs limit their usefulness. First, they are very slow. Because they must process data sequentially, they cannot take advantage of parallel computing hardware such as graphics processing units (GPUs) for training and inference. Second, they cannot handle long sequences of text: as an RNN moves deeper into a text excerpt, the influence of the first words of the sentence gradually fades. This problem, known as the "vanishing gradient", arises when two linked words are far apart in the text. Third, they only capture the relationship between a word and the words that precede it, while in fact the meaning of a word depends on the words that come both before and after it.

Long Short-Term Memory (LSTM) networks, the successors of RNNs, alleviate the vanishing gradient problem to some extent and can handle longer text sequences. But LSTMs are even slower to train than RNNs and still cannot take full advantage of parallel computing, because they continue to rely on processing text sequences serially.

The Transformer was introduced in a 2017 paper titled "Attention Is All You Need", which made two key contributions. First, it made it possible to process entire sequences in parallel, scaling the speed and capacity of sequential deep learning models to unprecedented levels. Second, it introduced the "attention mechanism", which can track relationships between words across very long text sequences, both forward and backward.


Before discussing how the Transformer model works, it is worth looking at the types of problems that sequence neural networks solve.

  • Vector-to-sequence models take a single input (such as an image) and generate a sequence of data (such as a description).
  • Sequence-to-vector models take sequence data as input, such as product reviews or social media posts, and output a single value, such as a sentiment score.
  • Sequence-to-sequence models take a sequence as input, such as an English sentence, and output another sequence, such as the French translation of that sentence.

Despite their differences, all these types of models have one thing in common: they learn representations. The job of a neural network is to convert one type of data into another. During training, the network's hidden layers (the layers between the input and output) adjust their parameters in a way that best represents the characteristics of the input data type and maps them to the output. The original Transformer was designed as a sequence-to-sequence (seq2seq) model for machine translation (though sequence-to-sequence models are not limited to translation tasks). It consists of an encoder module that compresses the input string from the source language into a vector representing the words and their relationships to one another, and a decoder module that converts the encoded vector into a text string in the target language.

2. Tokenization and embedding


The input text must be processed and converted into a unified format before it can be fed into the Transformer. First, the text is passed through a "tokenizer", which breaks it into chunks of characters that can be processed individually. The tokenization algorithm depends on the application. In most cases, each word and punctuation mark roughly counts as one token, and some suffixes and prefixes count as separate tokens (for example, "ize", "ly", and "pre"). The tokenizer produces a list of numbers representing the token IDs of the input text.
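As an illustration only, here is a minimal Python sketch of the word-and-punctuation style of tokenization described above. The vocabulary is made up for the example; real tokenizers learn subword vocabularies (for example BPE or WordPiece) from large corpora and contain tens of thousands of entries.

```python
import re

# A hypothetical toy vocabulary; real tokenizers learn subword vocabularies.
vocab = {"<unk>": 0, "the": 1, "big": 2, "black": 3, "cat": 4,
         "crossed": 5, "road": 6, ".": 7}

def tokenize(text):
    # Split into words and punctuation marks, lowercasing for simplicity.
    return re.findall(r"\w+|[^\w\s]", text.lower())

def encode(text):
    # Map each token to its ID, falling back to the unknown token.
    return [vocab.get(tok, vocab["<unk>"]) for tok in tokenize(text)]

print(tokenize("The big black cat crossed the road."))
# ['the', 'big', 'black', 'cat', 'crossed', 'the', 'road', '.']
print(encode("The big black cat crossed the road."))
# [1, 2, 3, 4, 5, 1, 6, 7]
```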

The tokens are then converted into "word embeddings". A word embedding is a vector that attempts to capture the meaning of a word in a multi-dimensional space. For example, the words "cat" and "dog" may have similar values along some dimensions because both are used in sentences about animals and pets, while along other dimensions that distinguish felines from canines, "cat" is closer to "lion" than to "wolf". Likewise, "Paris" and "London" are probably close to each other because both are cities, yet "London" is closer to "England" and "Paris" is closer to "France" along the dimensions that differentiate countries. Word embeddings typically have hundreds of dimensions.

Word embeddings are created by embedding models that are trained separately from the Transformer. Several pre-trained embedding models are available for language tasks.
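For illustration, here is a minimal sketch of the embedding lookup step using PyTorch. The vocabulary size and number of dimensions are arbitrary, and in many modern models the embedding table is simply learned jointly with the rest of the network.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 10_000, 512     # arbitrary sizes for illustration
embedding = nn.Embedding(vocab_size, d_model)

token_ids = torch.tensor([[1, 2, 3, 4, 5, 1, 6, 7]])   # (batch, seq_len)
word_vectors = embedding(token_ids)                     # (batch, seq_len, d_model)
print(word_vectors.shape)  # torch.Size([1, 8, 512])
```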

3. The attention layer


Once the sentence is converted into a list of word embeddings, it is fed into the Transformer's encoder module. Unlike RNN and LSTM models, the Transformer does not receive one input at a time: it receives the embedding values for the entire sentence and processes them in parallel. This makes Transformers more computationally efficient than their predecessors and also enables them to examine the context of the text in both forward and backward directions.


To maintain the order of the words in the sentence, the Transformer applies "positional encoding", which essentially means it modifies the value of each embedding vector to represent the token's position in the text.
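One common choice, used in the original Transformer paper, is the sinusoidal positional encoding, which is simply added to the word embeddings. A minimal NumPy sketch:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    positions = np.arange(seq_len)[:, None]            # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]           # (1, d_model / 2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# Added to the word embeddings so the model can distinguish positions.
pe = sinusoidal_positional_encoding(seq_len=8, d_model=512)
print(pe.shape)  # (8, 512)
```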

Next, the input is passed to the first encoder block, which processes it through an "attention layer". The attention layer attempts to capture the relationships between the words in a sentence. For example, consider the sentence "The big black cat crossed the road after it dropped a bottle on its side." Here, the model must associate "it" with "cat" and "its" with "bottle". It should also establish other associations, such as "big" with "cat" or "crossed" with "cat". In other words, the attention layer receives a list of word embeddings representing the values of individual words and produces a list of vectors representing individual words and their relationships to one another. The attention layer contains multiple "attention heads", each of which can capture a different kind of relationship between words.
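At the core of each attention head is scaled dot-product attention, which compares every word with every other word. A minimal PyTorch sketch, with the learned projection matrices that produce the queries, keys, and values omitted for brevity:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k)
    d_k = q.size(-1)
    # Similarity of every query with every key, scaled to keep values stable.
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (batch, seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)             # attention weights per word
    return weights @ v                              # weighted sum of the values

# Self-attention: queries, keys, and values all come from the same sentence.
q = k = v = torch.randn(1, 8, 64)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 64])
```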


The output of the attention layer is fed to a feedforward neural network, which transforms it into a vector representation and passes it on to the next attention layer. Transformers contain several blocks of attention and feedforward layers to gradually capture more complex relationships.
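As a rough sketch, a recent version of PyTorch ships a ready-made encoder block that combines multi-head self-attention with a feedforward network; the hyperparameters below are arbitrary:

```python
import torch
import torch.nn as nn

# Each encoder layer = multi-head self-attention + feedforward network.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=512, nhead=8, dim_feedforward=2048, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

x = torch.randn(1, 8, 512)   # (batch, seq_len, d_model): embeddings + positional encoding
memory = encoder(x)          # same shape; attention vectors handed to the decoder
print(memory.shape)          # torch.Size([1, 8, 512])
```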

The decoder module is tasked with converting the encoder’s attention vectors into output data (e.g., a translated version of the input text). During the training phase, the decoder has access to the attention vectors produced by the encoder and the expected results (e.g., translated strings).

The decoder uses the same tokenization, word embedding, and attention mechanisms to process the expected result and create its own attention vectors. It then passes these attention vectors, together with the attention vectors from the encoder module, through a cross-attention layer that establishes the relationships between input and output values. In a translation application, this is the part where the words of the source and target languages are mapped to each other. As in the encoder module, the decoder's attention vectors are passed through feedforward layers. The result is then mapped to a very large vector the size of the target vocabulary (in the case of translation, this can involve tens of thousands of words).
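The full encoder-decoder pipeline can be sketched with PyTorch's built-in `nn.Transformer` module. The dimensions and vocabulary size below are arbitrary, and the embeddings and positional encodings are assumed to have been applied already:

```python
import torch
import torch.nn as nn

d_model, tgt_vocab = 512, 32_000    # arbitrary sizes for illustration
model = nn.Transformer(d_model=d_model, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6,
                       batch_first=True)
to_vocab = nn.Linear(d_model, tgt_vocab)   # maps decoder output to target-vocabulary scores

src = torch.randn(1, 8, d_model)    # source-sentence embeddings (+ positional encoding)
tgt = torch.randn(1, 7, d_model)    # target-sentence embeddings produced so far
out = model(src, tgt)               # (1, 7, d_model): decoder attention vectors
logits = to_vocab(out)              # (1, 7, tgt_vocab): a score for every target word
print(logits.shape)
```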

4. Training the Transformer

During training, the Transformer is given a very large corpus of paired examples (e.g., English sentences and their corresponding French translations). The encoder module receives and processes the complete input string, while the decoder receives a masked version of the output string, one word at a time, and tries to establish a mapping between the encoded attention vectors and the expected result. The decoder tries to predict the next word and corrects itself based on the difference between its output and the expected result. This feedback enables the Transformer to adjust the parameters of the encoder and decoder and gradually build the correct mapping between the input and output languages.
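A rough sketch of that training step, assuming the same PyTorch modules as above: the decoder input is the target sequence shifted right (teacher forcing), and a causal mask prevents each position from attending to the words it is supposed to predict.

```python
import torch
import torch.nn as nn

model = nn.Transformer(d_model=512, nhead=8, batch_first=True)
to_vocab = nn.Linear(512, 32_000)        # arbitrary target-vocabulary size
loss_fn = nn.CrossEntropyLoss()

tgt_len = 7
# Causal mask: position i in the target may only attend to positions <= i.
tgt_mask = model.generate_square_subsequent_mask(tgt_len)

src = torch.randn(1, 8, 512)             # encoded source embeddings
tgt_in = torch.randn(1, tgt_len, 512)    # target embeddings, shifted right (teacher forcing)
expected = torch.randint(0, 32_000, (1, tgt_len))   # IDs of the expected next words

logits = to_vocab(model(src, tgt_in, tgt_mask=tgt_mask))
loss = loss_fn(logits.view(-1, 32_000), expected.view(-1))
loss.backward()   # gradients adjust encoder and decoder parameters together
```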

The more training data and parameters a Transformer has, the better it is at maintaining coherence and consistency across longer sequences of text.

5. Variants of the Transformer

In the machine translation example discussed above, the Transformer's encoder module learns the relationships between English words and sentences, while the decoder learns the mapping between English and French.

But not all Transformer applications require both encoder and decoder modules. For example, the GPT family of large language models uses a stack of decoder modules to generate text. BERT, another Transformer variant developed by Google researchers, uses only the encoder module.
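For example, assuming the Hugging Face transformers library is installed, the decoder-only GPT-2 model can generate text in a couple of lines:

```python
# Assumes the Hugging Face `transformers` library is installed.
from transformers import pipeline

# GPT-2 is a decoder-only Transformer: it generates text one token at a time.
generator = pipeline("text-generation", model="gpt2")
print(generator("The Transformer architecture", max_length=30)[0]["generated_text"])
```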

The advantage of some of these architectures is that they can be trained through self-supervised or unsupervised learning. BERT, for example, does most of its training on a large corpus of unlabeled text, masking out parts of it and trying to predict the missing parts. It then adjusts its parameters based on how close its predictions were to the actual data. By repeating this process continuously, BERT captures the relationships between different words in different contexts. After this pre-training phase, BERT can be fine-tuned for downstream tasks such as question answering, text summarization, or sentiment analysis by training on a small number of labeled examples. Unsupervised and self-supervised pre-training reduce the effort required to annotate training data.
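The masked-prediction objective can be seen directly by querying a pre-trained BERT model, again assuming the Hugging Face transformers library is installed:

```python
# Assumes the Hugging Face `transformers` library is installed.
from transformers import pipeline

# BERT is pre-trained to predict tokens that have been masked out.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("The big black [MASK] crossed the road."):
    print(prediction["token_str"], round(prediction["score"], 3))
```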

There is much more to say about Transformers and the new applications they are unlocking, which is beyond the scope of this article. Researchers are still finding ways to get more out of Transformers.

Transformers have also sparked discussions about language understanding and artificial general intelligence. What is clear is that Transformers, like other neural networks, are statistical models that capture regularities in data in clever and sophisticated ways. They do not "understand" language the way humans do, but their development is nonetheless exciting and has much more to offer.

Original link: https://bdtechtalks.com/2022/05/02/what-is-the-transformer/

