ChatGPT quickly gained the attention of millions of people, but many were wary because they didn't understand how it works. This article is an attempt to break it down so it is easier to understand.
At its core, however, ChatGPT is a very complex system. If you want to play with ChatGPT or figure out what it is, the core interface is a chat window where you can ask questions or enter prompts and the AI will respond. An important detail to remember is that context is preserved within a chat: messages can reference earlier information, and ChatGPT is able to understand those references.
What happens when a query is entered in the chat box?
Neural Network
First of all, there is a lot going on beneath ChatGPT's surface. Machine learning has been developing rapidly over the past 10 years, and ChatGPT draws on many state-of-the-art techniques to achieve its results.
Neural networks are layers of interconnected "neurons"; each neuron is responsible for receiving input, processing it, and passing it on to the next neuron in the network. Neural networks form the backbone of today's artificial intelligence. The input is usually a set of numerical values called "features" that represent some aspect of the data being processed. For example, in the case of language processing, the features might be word embeddings that represent the meaning of each word in a sentence.
Word embeddings are simply a numerical representation of text that a neural network uses to understand the semantics of the text, which can then be used for other purposes, such as responding in a semantically coherent way!
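To make this concrete, here is a tiny sketch with made-up four-dimensional vectors (real embeddings have hundreds or thousands of dimensions and are learned from huge amounts of text) showing how numerical vectors let a program compare word meanings:

```python
import numpy as np

# Toy, hand-written embeddings; real models learn these vectors from data.
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10, 0.05]),
    "queen": np.array([0.78, 0.70, 0.12, 0.04]),
    "apple": np.array([0.05, 0.10, 0.90, 0.70]),
}

def cosine_similarity(a, b):
    # Cosine similarity measures how closely two vectors point in the same direction.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related meanings
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low: unrelated meanings
```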
So after you press Enter in ChatGPT, the text is first converted into word embeddings, which were trained on text from all over the internet. A neural network that has been trained to output a set of appropriate response word embeddings, given the input embeddings, then produces the answer. Those embeddings are translated back into human-readable words by applying the inverse of the operation used on the input query. This decoded output is what ChatGPT prints.
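Below is a highly simplified, hypothetical sketch of that query-to-response flow. None of these functions are OpenAI APIs, and the "model" here is just a canned lookup standing in for the 175-billion-parameter network described in the next section:

```python
TOKEN_TO_ID = {"how": 0, "are": 1, "you": 2, "i": 3, "am": 4, "fine": 5}
ID_TO_TOKEN = {i: t for t, i in TOKEN_TO_ID.items()}

def encode(text):
    # Step 1: convert human-readable text into numerical IDs (stand-ins for embeddings).
    return [TOKEN_TO_ID[w] for w in text.lower().split()]

def tiny_language_model(input_ids):
    # Step 2: map the input representation to an output representation.
    # Here this is a canned lookup; ChatGPT uses a huge trained network instead.
    canned_replies = {tuple(encode("how are you")): encode("i am fine")}
    return canned_replies.get(tuple(input_ids), encode("fine"))

def decode(output_ids):
    # Step 3: the inverse operation turns the output back into readable text.
    return " ".join(ID_TO_TOKEN[i] for i in output_ids)

print(decode(tiny_language_model(encode("How are you"))))  # -> "i am fine"
```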
ChatGPT model size
The computational cost of this conversion and output generation is very high. ChatGPT sits on top of GPT-3, a large language model with 175 billion parameters. That means there are 175 billion weights in the neural network that OpenAI tuned using its enormous dataset.
So each query requires on the order of 175 billion calculations for every token of the response, which adds up quickly. OpenAI may have found ways to cache these calculations and reduce the computational cost, but it is not known whether such details have been published anywhere. Additionally, GPT-4, expected to be released early this year, is said to have 1,000 times more parameters!
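As a rough, back-of-envelope illustration (assuming the common rule of thumb that a forward pass costs about two floating-point operations per parameter per generated token, which is an approximation and not an OpenAI figure):

```python
# Back-of-envelope estimate of the compute needed for one ChatGPT-style reply.
params = 175e9                  # GPT-3 parameter count
flops_per_token = 2 * params    # assumed ~2 floating-point ops per parameter per token
tokens_in_reply = 200           # assumed length of a typical response

total_flops = flops_per_token * tokens_in_reply
print(f"{total_flops:.1e} floating-point operations for one reply")  # ~7.0e+13
```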
There will be real costs in terms of computational complexity! Don't be surprised if ChatGPT becomes a paid product soon, as OpenAI is currently spending millions of dollars to run it for free.
Encoders, decoders and RNNs
A commonly used neural network structure in natural language processing is the encoder-decoder network. These networks are designed to "encode" an input sequence into a compact representation and then "decode" that representation into an output sequence.
Traditionally, encoder-decoder networks have been paired with recurrent neural networks (RNN) for processing sequential data. The encoder processes the input sequence and produces a fixed-length vector representation, which is then passed to the decoder. The decoder processes this vector and produces an output sequence.
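As a rough illustration of this structure, here is a minimal PyTorch sketch (the layer sizes, the GRU choice, and the random data are made up for demonstration and are far smaller than anything used in practice):

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # The encoder compresses the input sequence into a fixed-length state...
        _, state = self.encoder(self.embed(src_ids))
        # ...which initializes the decoder that produces the output sequence.
        dec_out, _ = self.decoder(self.embed(tgt_ids), state)
        return self.out(dec_out)            # a score over the vocabulary at each step

model = Seq2Seq()
src = torch.randint(0, 1000, (1, 7))        # a 7-token input sequence
tgt = torch.randint(0, 1000, (1, 5))        # a 5-token target sequence
print(model(src, tgt).shape)                # torch.Size([1, 5, 1000])
```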
Encoder-decoder networks have been widely used in tasks such as machine translation, where the input is a sentence in one language and the output is the translation of that sentence into another language. They are also applied to summarization and image caption generation tasks.
Transformers and Attention
Like the encoder-decoder structure, the transformer consists of two components; however, the transformer differs in that it uses a self-attention mechanism that allows each element of the input to attend to all other elements, letting it capture relationships between elements regardless of how far apart they are.
The transformer also uses multi-head attention, allowing it to focus on multiple parts of the input simultaneously. This enables it to capture complex relationships in the input text and produce highly accurate results.
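To make the mechanism concrete, here is a minimal NumPy sketch of scaled dot-product self-attention. The learned query/key/value projections and the multiple parallel heads of a real transformer are deliberately omitted so the core idea stands out:

```python
import numpy as np

def self_attention(X):
    # X has shape (seq_len, d): one d-dimensional embedding per token.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # how strongly each token relates to every other token
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ X                               # each output mixes information from all tokens

X = np.random.rand(4, 8)        # 4 tokens, 8-dimensional embeddings
print(self_attention(X).shape)  # (4, 8): same shape, but every position now "sees" the whole sequence
```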
When the "Attention is All You Need" paper was published in 2017, the transformer replaced the encoder-decoder architecture as the state-of-the-art model for natural language processing because it could achieve better performance on longer texts. good performance.
Transformer architecture, from https://arxiv.org/pdf/1706.03762.pdf
Generative pre-training
Generative pre-training is a technique that has been particularly successful in the field of natural language processing. It involves training extensive neural networks on massive data sets in an unsupervised manner to learn a universal representation of the data. This pre-trained network can be fine-tuned for specific tasks, such as language translation or question answering, thereby improving performance.
Generative pre-training architecture, from "Improving Language Understanding by Generative Pre-Training"
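A minimal sketch of the next-token-prediction objective behind this kind of pre-training is shown below. The tiny placeholder model only looks at one token at a time, unlike GPT-3, which attends to the whole preceding context, but the training signal is the same: predict the next token of unlabeled text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, emb_dim = 1000, 32
# A toy stand-in for a language model: embed each token, then score the vocabulary.
model = nn.Sequential(nn.Embedding(vocab_size, emb_dim), nn.Linear(emb_dim, vocab_size))

tokens = torch.randint(0, vocab_size, (1, 16))    # an unlabeled text snippet as token IDs
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # shift by one: the target at position t is token t+1

logits = model(inputs)                            # shape (1, 15, vocab_size)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                   # these gradients drive unsupervised pre-training
print(loss.item())
```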
In the case of ChatGPT, this meant fine-tuning the last layer of the GPT-3 model to fit the use case of answering questions in chat, a process that also leverages human labeling. The following figure provides a more detailed view of how ChatGPT was fine-tuned:
ChatGPT fine-tuning steps, from https://arxiv.org/pdf/2203.02155.pdf
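To illustrate the "adapt a pre-trained model" idea from the text, here is a hypothetical PyTorch sketch that freezes a pre-trained network and leaves only its final layer trainable. ChatGPT's actual procedure, summarized in the figure above, is more involved and also uses reinforcement learning from human feedback:

```python
import torch.nn as nn

# Stand-in for a pre-trained language model (the real one has 175 billion parameters).
pretrained = nn.Sequential(
    nn.Embedding(1000, 32),
    nn.Linear(32, 32),
    nn.Linear(32, 1000),   # final layer, the part fine-tuned on labeled examples
)

for param in pretrained.parameters():
    param.requires_grad = False        # freeze the pre-trained weights...

for param in pretrained[2].parameters():
    param.requires_grad = True         # ...except the final layer (index 2)

trainable = [name for name, p in pretrained.named_parameters() if p.requires_grad]
print(trainable)                       # ['2.weight', '2.bias']
```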
Bringing it all together
So there are many moving parts under the ChatGPT umbrella, and their number will only continue to grow. It will be very interesting to see how the system keeps developing, as advances in many different areas will help GPT-like models gain further adoption.
Over the next year or two, we may see significant disruption from this new enabling technology.