
AI Encyclopedia: How ChatGPT works

ChatGPT quickly gained the attention of millions of people, but many were wary of it because they didn't understand how it works. This article is an attempt to break it down so that it is easier to understand.

At its core, ChatGPT is a very complex system. If you want to play with ChatGPT or figure out what it is, the core interface is a chat window where you can ask questions or provide queries, and the AI will respond. An important detail to remember is that context is preserved within a chat: messages can reference previous information, and ChatGPT is able to understand this contextually.
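As a rough illustration (a minimal sketch, not OpenAI's actual implementation), context can be preserved simply by folding the previous turns of the conversation back into each new query:

```python
# Minimal sketch: preserve chat context by resending prior turns with each
# new message. All names here are illustrative, not OpenAI's API.
history = [
    ("User", "Who wrote Hamlet?"),
    ("Assistant", "William Shakespeare."),
]

def build_prompt(history, new_message):
    """Concatenate earlier turns and the new message into one prompt."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"User: {new_message}")
    return "\n".join(lines)

# The model sees the whole conversation, so "he" can be resolved to Shakespeare.
print(build_prompt(history, "When did he write it?"))
```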

What happens when a query is entered in the chat box?

Neural networks

First of all, there is a lot going on under the hood of ChatGPT. Machine learning has been developing rapidly over the past 10 years, and ChatGPT utilizes many state-of-the-art techniques to achieve its results.

Neural networks are layers of interconnected "neurons"; each neuron is responsible for receiving input, processing it, and passing it to the next neuron in the network. Neural networks form the backbone of today's artificial intelligence. The input is usually a set of numerical values called "features" that represent some aspect of the data being processed. For example, in the case of language processing, the features might be word embeddings that represent the meaning of each word in a sentence.

Word embeddings are simply a numerical representation of text that a neural network will use to understand the semantics of the text, which can then be used for other purposes, such as responding in a semantically logical way!
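For intuition, here is a toy sketch (the vectors are made up for illustration; real embeddings have hundreds or thousands of dimensions and are learned from data):

```python
import numpy as np

# Hypothetical 4-dimensional word embeddings (real models learn these values).
embeddings = {
    "king":  np.array([0.8, 0.1, 0.9, 0.2]),
    "queen": np.array([0.7, 0.2, 0.9, 0.3]),
    "apple": np.array([0.1, 0.9, 0.1, 0.8]),
}

def cosine_similarity(a, b):
    """Higher values mean the vectors (and hence the words) are more similar."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```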

So after pressing Enter in ChatGPT, the text is first converted into word embeddings, which were trained on text from all over the internet. A neural network, trained to output a set of appropriate response word embeddings given the input embeddings, then produces the response. These embeddings are translated back into human-readable words by applying the inverse of the operation used on the input query. This decoded output is what ChatGPT prints.
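Put together, the pipeline looks roughly like the following toy sketch. Every piece here is a stand-in: real systems use learned tokenizers and embedding tables, and the network has 175 billion trained weights rather than an identity function.

```python
VOCAB = ["hello", "world", "how", "are", "you"]
# One-hot "embeddings" as placeholders for learned embedding vectors.
EMBED = {w: [float(i == j) for j in range(len(VOCAB))]
         for i, w in enumerate(VOCAB)}

def encode(text):
    """Text -> list of embedding vectors."""
    return [EMBED[w] for w in text.lower().split() if w in EMBED]

def network(vectors):
    """Placeholder for the trained model that maps input to response embeddings."""
    return vectors

def decode(vectors):
    """Inverse operation: map each output vector back to the nearest word."""
    def nearest(v):
        return max(VOCAB, key=lambda w: sum(a * b for a, b in zip(EMBED[w], v)))
    return " ".join(nearest(v) for v in vectors)

print(decode(network(encode("hello world"))))  # -> "hello world"
```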

ChatGPT model size

The computational cost of this conversion and of generating output is very high. ChatGPT sits on top of GPT-3, a large language model with 175 billion parameters. This means there are 175 billion weights in the underlying neural network that OpenAI tuned using its large dataset.

So each query requires at least 2 × 175 billion calculations, which adds up quickly. OpenAI may have found a way to cache these calculations to reduce computational costs, but it is unknown whether this information has been published anywhere. Additionally, GPT-4, expected to be released early this year, is said to have 1,000 times more parameters!
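Taking those numbers at face value, a back-of-the-envelope estimate (assuming roughly two floating-point operations per weight per forward pass, a common rule of thumb) shows why this adds up:

```python
params = 175e9                      # GPT-3 parameter count
flops_per_pass = 2 * params         # ~2 ops per weight: one multiply, one add
tokens_generated = 100              # a short response (one pass per token)

print(f"{flops_per_pass:.1e} FLOPs per forward pass")             # 3.5e+11
print(f"{flops_per_pass * tokens_generated:.1e} FLOPs per reply") # 3.5e+13
```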

There are real costs in terms of computational complexity! Don't be surprised if ChatGPT becomes a paid product soon, as OpenAI is currently spending millions of dollars to run it for free.

Encoders, decoders and RNNs

A commonly used neural network structure in natural language processing is the encoder-decoder network. These networks are designed to "encode" an input sequence into a compact representation and then "decode" that representation into an output sequence.

Traditionally, encoder-decoder networks have been paired with recurrent neural networks (RNN) for processing sequential data. The encoder processes the input sequence and produces a fixed-length vector representation, which is then passed to the decoder. The decoder processes this vector and produces an output sequence.
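A minimal sketch of this structure in PyTorch (untrained, with arbitrary sizes, purely for illustration):

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, vocab_size=1000, hidden_size=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.encoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, src, tgt):
        # Encoder compresses the entire input sequence into one fixed-length vector.
        _, context = self.encoder(self.embed(src))
        # Decoder unrolls that vector into an output sequence.
        dec_states, _ = self.decoder(self.embed(tgt), context)
        return self.out(dec_states)   # per-step scores over the vocabulary

model = EncoderDecoder()
src = torch.randint(0, 1000, (1, 7))  # a 7-token input sentence (token ids)
tgt = torch.randint(0, 1000, (1, 5))  # the 5 output tokens produced so far
print(model(src, tgt).shape)          # torch.Size([1, 5, 1000])
```

The fixed-length context vector is also the weakness of this design: everything about the input has to squeeze through it, which is part of what the attention mechanism in the next section addresses.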

Encoder-decoder networks have been widely used in tasks such as machine translation, where the input is a sentence in one language and the output is the translation of that sentence into another language. They are also applied to summarization and image caption generation tasks.

Transformers and attention

Similar to the encoder-decoder structure, the transformer consists of two components; however, the transformer differs in that it uses a self-attention mechanism that allows each element of the input to attend to all other elements, letting it capture relationships between elements regardless of their distance from each other.

Transformer also uses multi-head attention, allowing it to focus on multiple parts of the input simultaneously. This enables it to capture complex relationships in input text and produce highly accurate results.
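A minimal sketch of scaled dot-product self-attention in NumPy (a single head, and without the learned query/key/value projections that a real transformer applies before this step):

```python
import numpy as np

def self_attention(X):
    """Each row of X is one token's embedding; every token attends to all others."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: rows sum to 1
    return weights @ X                               # weighted mix of all tokens

X = np.random.randn(5, 8)        # 5 tokens, 8-dimensional embeddings
print(self_attention(X).shape)   # (5, 8): one context-aware vector per token
```

Multi-head attention runs several such computations in parallel, each with its own learned projections, and concatenates the results.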

When the "Attention Is All You Need" paper was published in 2017, the transformer replaced the encoder-decoder architecture as the state-of-the-art model for natural language processing because it could achieve better performance on longer texts.

Transformer architecture, from https://arxiv.org/pdf/1706.03762.pdf

Generative pre-training

Generative pre-training is a technique that has been particularly successful in the field of natural language processing. It involves training extensive neural networks on massive data sets in an unsupervised manner to learn a universal representation of the data. This pre-trained network can be fine-tuned for specific tasks, such as language translation or question answering, thereby improving performance.
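The unsupervised objective is typically next-token prediction: the network learns to guess each token from the ones before it. A minimal sketch of that loss (the logits here are random stand-ins for what the model would actually output):

```python
import torch
import torch.nn.functional as F

token_ids = torch.randint(0, 1000, (1, 9))  # a 9-token snippet of training text
logits = torch.randn(1, 8, 1000)            # model scores at positions 1..8

# Shift by one: the logits at position t are trained to predict token t+1.
loss = F.cross_entropy(logits.reshape(-1, 1000), token_ids[:, 1:].reshape(-1))
print(loss)  # this loss is minimized over enormous amounts of internet text
```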

Generative pre-training architecture, from "Improving Language Understanding by Generative Pre-Training"

In the case of ChatGPT, this meant fine-tuning the last layer of the GPT-3 model to fit the use case of answering questions in chat, a process that also leverages human labeling. The following figure gives a more detailed view of the ChatGPT fine-tuning steps:

ChatGPT fine-tuning steps, from https://arxiv.org/pdf/2203.02155.pdf
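As a toy illustration of the last-layer fine-tuning idea described above (the model here is a hypothetical stand-in, not GPT-3): freeze the pre-trained weights and leave only the final layer trainable on the new chat data.

```python
import torch.nn as nn

# Hypothetical stand-in for a pre-trained GPT-style network.
pretrained = nn.Sequential(
    nn.Embedding(1000, 64),   # pre-trained embedding table
    nn.Linear(64, 64),        # pre-trained body (vastly simplified)
    nn.Linear(64, 1000),      # the "last layer" to be fine-tuned
)

for p in pretrained.parameters():
    p.requires_grad = False           # freeze every pre-trained weight...

for p in pretrained[-1].parameters():
    p.requires_grad = True            # ...then unfreeze only the final layer

trainable = sum(p.numel() for p in pretrained.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # only the last layer's weights
```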

Bringing it all together

So there are many moving parts under the umbrella of ChatGPT that will only continue to grow. It will be very interesting to see how it continues to develop, as advancements in many different areas will help GPT-like models gain further adoption.

Over the next year or two, we may see significant disruption from this new enabling technology.
