
What is the Transformer machine learning model?

Apr 08, 2023, 06:31 PM
Tags: machine learning, model, Codex

Translator | Li Rui

Reviewer | Sun Shujuan

In recent years, the Transformer machine learning model has become one of the main highlights of advances in deep learning and deep neural networks. It is chiefly used for advanced applications in natural language processing: Google uses it to enhance its search engine results, and OpenAI used it to create its famous GPT-2 and GPT-3 models.


Since its debut in 2017, the Transformer architecture has continued to evolve and expand into many different variants, extending from language tasks into other domains. Transformers have been used for time series forecasting. They are the key innovation behind AlphaFold, DeepMind's protein structure prediction model. Codex, OpenAI's source code generation model, is also based on the Transformer. More recently, Transformers have entered the field of computer vision, where they are slowly replacing convolutional neural networks (CNNs) in many complex tasks.

Researchers are still exploring ways to improve Transformers and apply them in new areas. Here is a quick explanation of what makes Transformers exciting and how they work.

1. Using neural networks to process sequence data


Traditional feedforward neural networks are not designed to track sequential data; they simply map each input to an output. This works well for tasks like image classification but fails on sequential data like text. A machine learning model that processes text must not only handle each word but also account for how the words are ordered and related to one another, since the meaning of a word can change depending on the words that come before and after it in the sentence.

Before the advent of the Transformer, recurrent neural networks (RNNs) were the preferred solution for natural language processing. Given a sequence of words, an RNN processes the first word and feeds the result back into the layer that processes the next word. This enables it to keep track of an entire sentence rather than processing each word individually.

The shortcomings of RNNs limit their usefulness. First, they are very slow. Because they must process data sequentially, they cannot take advantage of parallel computing hardware such as graphics processing units (GPUs) for training and inference. Second, they cannot handle long sequences of text: as an RNN works deeper into a text excerpt, the effect of the first words of the sentence gradually fades. This problem, known as "vanishing gradients", arises when two linked words are far apart in the text. Third, they only capture the relationship between a word and the words that precede it, when in fact the meaning of a word depends on the words both before and after it.

Long short-term memory (LSTM) networks, the successors of RNNs, solve the vanishing gradient problem to some extent and can handle longer text sequences. But LSTMs are even slower to train than RNNs and still cannot take full advantage of parallel computing, because they still rely on processing text sequences serially.

A paper published in 2017, "Attention Is All You Need", introduced the Transformer, which made two key contributions. First, it made it possible to process entire sequences in parallel, scaling the speed and capacity of sequential deep learning models to unprecedented levels. Second, it introduced "attention mechanisms" that can track the relationships between words across very long text sequences, both forward and backward.


Before discussing how the Transformer model works, it is worth reviewing the types of problems that sequence-processing neural networks solve:

  • Vector-to-sequence models take a single input (such as an image) and generate a sequence of data (such as a description).
  • Sequence-to-vector models take sequence data as input, such as a product review or a social media post, and output a single value, such as a sentiment score.
  • Sequence-to-sequence models take a sequence as input, such as an English sentence, and output another sequence, such as the French translation of that sentence.

Despite their differences, all these types of models have one thing in common: they learn representations. The job of a neural network is to convert one type of data into another. During training, the network's hidden layers (the layers between the input and the output) adjust their parameters in the way that best represents the features of the input data type and maps them to the output. The original Transformer was designed as a sequence-to-sequence (seq2seq) model for machine translation (of course, sequence-to-sequence models are not limited to translation tasks). It consists of an encoder module that compresses an input string in the source language into a vector representing the words and their relations to each other, and a decoder module that converts the encoded vector into a string of text in the target language.
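As a rough illustration of this encoder-decoder structure, here is a minimal sketch using PyTorch's built-in nn.Transformer module. The vocabulary sizes and the toy batch are assumptions made for the example, not settings taken from the original paper.

```python
# Minimal encoder-decoder (seq2seq) Transformer sketch in PyTorch.
# Vocabulary sizes and the toy batch below are made up for illustration.
import torch
import torch.nn as nn

d_model = 512                              # embedding dimension
src_vocab, tgt_vocab = 10_000, 12_000      # hypothetical vocabulary sizes

src_embed = nn.Embedding(src_vocab, d_model)
tgt_embed = nn.Embedding(tgt_vocab, d_model)
transformer = nn.Transformer(d_model=d_model, nhead=8,
                             num_encoder_layers=6, num_decoder_layers=6,
                             batch_first=True)
generator = nn.Linear(d_model, tgt_vocab)  # maps decoder outputs to target-vocabulary scores

# Toy batch: 2 source sentences of 7 tokens and 2 target prefixes of 5 tokens.
src = torch.randint(0, src_vocab, (2, 7))
tgt = torch.randint(0, tgt_vocab, (2, 5))

out = transformer(src_embed(src), tgt_embed(tgt))  # (2, 5, d_model)
logits = generator(out)                            # (2, 5, tgt_vocab)
print(logits.shape)
```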

2. Tokenization and embedding


The input text must be processed and converted into a unified format before it can be fed to the Transformer. First, the text is passed through a "tokenizer", which breaks it into chunks of characters that can be processed individually. The tokenization algorithm depends on the application. In most cases, each word and punctuation mark roughly counts as one token, and some suffixes and prefixes count as separate tokens (for example, "ize", "ly", and "pre"). The tokenizer produces a list of numbers representing the token IDs of the input text.
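For instance, here is a minimal sketch using a tokenizer from the Hugging Face transformers library; the model name is just one common pretrained checkpoint, not one this article specifies.

```python
# Illustration of subword tokenization with the Hugging Face "transformers" library.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "Tokenization preprocesses the input text."
print(tokenizer.tokenize(text))  # subword pieces, e.g. ['token', '##ization', ...]
print(tokenizer.encode(text))    # the corresponding token IDs, with special tokens added
```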

The tokens are then converted into "word embeddings". A word embedding is a vector that attempts to capture the meaning of a word in a multi-dimensional space. For example, the words "cat" and "dog" may have similar values on some dimensions because both are used in sentences about animals and pets; however, on the dimensions that distinguish felines from canines, "cat" is closer to "lion" than to "wolf". Likewise, "Paris" and "London" are probably close to each other because they are both cities, while "London" is closer to "England" and "Paris" is closer to "France" on the dimensions that differentiate countries. Word embeddings typically have hundreds of dimensions.
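The following toy sketch shows how similarity falls out of this geometry. The 4-dimensional vectors are invented purely for illustration; real embeddings have hundreds of dimensions.

```python
# Toy illustration of embedding geometry: similar words get similar vectors.
# These 4-dimensional vectors are invented; real embeddings are far larger.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cat   = np.array([0.9, 0.1, 0.3, 0.0])
dog   = np.array([0.8, 0.2, 0.3, 0.1])
paris = np.array([0.0, 0.9, 0.1, 0.8])

print(cosine_similarity(cat, dog))    # high: both pets
print(cosine_similarity(cat, paris))  # low: unrelated concepts
```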

Word embeddings are created through embedding models that are trained separately from the Transformer. There are several pre-trained embedding models for language tasks.

3. The attention layer


Once the sentence is converted into a list of word embeddings, it is fed into the Transformer's encoder module. Unlike RNN and LSTM models, the Transformer does not receive one input at a time: it can take in the embedding values of an entire sentence and process them in parallel. This makes Transformers more computationally efficient than their predecessors, and it also enables them to examine the context of text in both the forward and backward directions.

To preserve the order of the words in the sentence, the Transformer applies "positional encoding", which essentially means that it modifies the value of each embedding vector to represent the word's position in the text.
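A minimal sketch of the sinusoidal positional encoding proposed in "Attention Is All You Need", where PE(pos, 2i) = sin(pos / 10000^(2i/d)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d)):

```python
# Sinusoidal positional encoding from "Attention Is All You Need".
import numpy as np

def positional_encoding(seq_len, d_model):
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]       # (1, d_model // 2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                   # even dimensions
    pe[:, 1::2] = np.cos(angles)                   # odd dimensions
    return pe

# Added element-wise to the word embeddings before the first encoder block.
print(positional_encoding(seq_len=50, d_model=512).shape)  # (50, 512)
```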

Next, the input is passed to the first encoder block, which processes it through an "attention layer". The attention layer attempts to capture the relationships between the words in the sentence. For example, consider the sentence "The big black cat crossed the road after it dropped a bottle on its side." Here, the model must associate "it" with "cat" and "its" with "bottle", and likewise establish other associations such as "big" with "cat" or "crossed" with "cat". In effect, the attention layer takes in a list of word embeddings representing individual word values and produces a list of vectors that represent both individual words and their relationships to one another. The attention layer contains multiple "attention heads", each of which can capture a different kind of relationship between words.
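At the core of each attention head is scaled dot-product attention, softmax(QK^T / sqrt(d_k))V. A minimal sketch, with toy sizes assumed for illustration:

```python
# Scaled dot-product attention, the core operation of each attention head.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how strongly each word attends to the others
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # relationship-aware word representations

# Toy sentence of 5 tokens with 8-dimensional query/key/value projections.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (5, 8)
```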


The output of the attention layer is fed to a feedforward neural network, which converts it into a vector representation and sends it to the next attention layer. Transformers contain several blocks of attention and feedforward layers, gradually capturing more complex relationships.

The decoder module is tasked with converting the encoder’s attention vectors into output data (e.g., a translated version of the input text). During the training phase, the decoder has access to the attention vectors produced by the encoder and the expected results (e.g., translated strings).

The decoder uses the same tokenization, word-embedding, and attention mechanisms to process the expected result and create its own attention vectors. It then passes these attention vectors, together with the encoder's, through an attention layer that establishes relationships between the input and output values. In a translation application, this is the part where words in the source and target languages are mapped to each other. Like the encoder module's, the decoder's attention vectors are passed through feedforward layers. The result is then mapped to a very large vector the size of the target data (in the case of translation, this can span tens of thousands of words).

4. Training the Transformer

During training, the Transformer is provided with a very large corpus of paired examples (e.g., English sentences and their corresponding French translations). The encoder module receives and processes the complete input string. The decoder, however, receives a masked version of the output string, one word at a time, and attempts to establish a mapping between the encoded attention vectors and the expected result. The decoder tries to predict the next word and makes corrections based on the difference between its output and the expected result. This feedback enables the Transformer to modify the parameters of the encoder and decoder and gradually create the right mappings between the input and output languages.
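This masking is commonly implemented as a causal ("look-ahead") mask, so that position i can only attend to positions up to i. A minimal PyTorch sketch, assumed here purely for illustration:

```python
# Causal mask for decoder training: True entries are positions the model
# may NOT attend to, so each word only sees the words before it.
import torch

seq_len = 5
mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
print(mask)
# tensor([[False,  True,  True,  True,  True],
#         [False, False,  True,  True,  True],
#         ...
# Passed as tgt_mask to modules such as torch.nn.Transformer.
```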

The more training data and parameters a Transformer has, the better it is at maintaining coherence and consistency across longer sequences of text.

5. Variants of the Transformer

In the machine translation example discussed above, the Transformer's encoder module learns the relationships between English words and sentences, while the decoder learns the mappings between English and French.

But not all Transformer applications require both encoder and decoder modules. For example, the GPT family of large language models uses stacks of decoder modules to generate text. BERT, another variant of the Transformer developed by Google researchers, uses only encoder modules.

The advantage of some of these architectures is that they can be trained through self-supervised or unsupervised learning. BERT, for example, does much of its training by taking a large corpus of unlabeled text, masking out parts of it, and trying to predict the missing parts. It then tunes its parameters based on how close its predictions were to the actual data. By continuously repeating this process, BERT captures the relationships between different words in different contexts. After this pretraining phase, BERT can be fine-tuned for downstream tasks such as question answering, text summarization, or sentiment analysis by training on a small number of labeled examples. Using unsupervised and self-supervised pretraining reduces the effort required to annotate training data.
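This masked-word objective can be tried directly with a pretrained checkpoint through the Hugging Face pipeline API; the model name below is one common choice, not one the article specifies.

```python
# BERT-style masked-word prediction with a pretrained checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for prediction in unmasker("Paris is the [MASK] of France."):
    print(prediction["token_str"], round(prediction["score"], 3))
# The top prediction is very likely "capital".
```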

There is a lot more to say about Transformers and the new applications they are unlocking, which is beyond the scope of this article. Researchers are still exploring ways to get more out of Transformers.

Transformers have also sparked discussions about language understanding and artificial general intelligence. What is clear is that Transformers, like other neural networks, are statistical models that capture regularities in data in clever and sophisticated ways. They do not "understand" language the way humans do, but their development is still exciting and has much more to offer.

Original link: https://bdtechtalks.com/2022/05/02/what-is-the-transformer/
