Brief analysis: the underlying principles of the ChatGPT application
ChatGPT is undoubtedly the hottest thing on the Internet right now. After using it for a while and reading up on it, Xiao Wang has learned some of the principles behind it and will try to explain the underlying principles of the ChatGPT application. If anything is inaccurate, please correct me.
Reading this article may answer the following questions for you:
Why do some ChatGPT services charge while others are free?
Why does ChatGPT answer word by word?
Why do the answers to Chinese questions sometimes make people laugh?
Why, when you ask it what day it is today, does it answer with a date in the past?
Why does it refuse to answer some questions?
"ChatGPT Domestic Version" Operating Principle
With the popularity of ChatGPT, many domestic versions have appeared. These versions differ in how many free uses they offer and in how they charge afterwards. Xiao Wang drew a sketch to help explain.
[Method 1]: After registering an account, you access the official service directly, which requires "scientific Internet access" (i.e. a VPN). There is currently no limit on the number of uses. For registration costs, please refer to my previous article.
[Method 2]: As I understand it, no VPN is needed; the cost of use is paying the operator of the "domestic version of ChatGPT" for their service, so the price varies from operator to operator.
How does ChatGPT work internally?
OpenAI launched this new conversational assistant on November 30, 2022. The chatbot is based on the large language model (LLM) GPT-3, or more precisely on its version 3.5. ChatGPT is actually an adaptation of InstructGPT, which was launched in January 2022 but did not make the same impression at the time.
Compared with its predecessors, what is so great about ChatGPT?
Thanks to its ability to automatically generate human-like text and to take conversational context into account, while avoiding the failings of predecessors such as Microsoft's Tay and Meta's Galactica. Tay became racist and xenophobic within 24 hours and was shut down; Galactica produced nonsense and misinformation, and could even express racism very eloquently, so it was taken offline three days after launch. OpenAI appears to have learned from Microsoft's and Meta's mistakes, and in a short period of time the system has been pushed to unprecedented levels.
What is GPT-3?
The GPT (Generative Pre-trained Transformer) series is a family of language models built on the Transformer architecture, developed by the San Francisco-based company OpenAI. OpenAI was founded in December 2015 by Elon Musk (the boss of Tesla) and the American businessman Sam Altman, former president of the incubator Y Combinator (Scribd, Reddit, Airbnb, Dropbox, GitLab, Women Who Code, etc.), who has chaired OpenAI's board since 2020.
When it was unveiled in 2020, GPT-3 was the largest language model ever built, with 175 billion parameters. It is so large that 800 GB of memory is required to train it.
LLMs are typically trained on a large body of example texts across different languages and domains. GPT-3 was trained on hundreds of billions of English words from Common Crawl, WebText2, Books1/2 and Wikipedia (Xiao Wang thinks this is why, when we ask it questions in Chinese, its answers sometimes make us laugh and cry). It was also trained on programming examples in CSS, JSX, Python, and more. It accepts 2,048 tokens as input, which lets it handle very long passages of about 1,500 words (OpenAI considers a token to be a piece of a word of roughly four characters, and gives the example of 1,000 tokens representing about 750 words).
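To get a feel for what a "token" is, here is a minimal Python sketch using OpenAI's open-source tiktoken tokenizer. This is only an illustration, not OpenAI's internal pipeline, and it assumes tiktoken is installed; the "r50k_base" encoding is the one commonly associated with the original GPT-3 models.

```python
# pip install tiktoken
import tiktoken

# "r50k_base" is the encoding commonly associated with the original GPT-3 models.
enc = tiktoken.get_encoding("r50k_base")

text = "ChatGPT answers questions word by word."
tokens = enc.encode(text)

print(f"{len(text)} characters -> {len(tokens)} tokens")
print(tokens)                              # the token ids the model actually sees
print([enc.decode([t]) for t in tokens])   # on average, roughly four characters per token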
GPT-3 is classified as a generative model, meaning it is primarily trained to predict the next token at the end of an input sequence, i.e. the next word (this is also why its answers appear on the screen word by word). It is essentially the autocomplete mechanism now found in search engines or in Outlook, as the sketch below tries to show.
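To make the "predict the next token" idea concrete, here is a toy sketch of the autoregressive loop. The hard-coded bigram table is entirely hypothetical (a real LLM learns this distribution over billions of tokens), but the loop shows why output streams word by word: each word is emitted as soon as it is chosen, then fed back as context for the next prediction.

```python
import random

# A hypothetical, hard-coded "model": for each word, the plausible next words
# and their probabilities. A real LLM learns this distribution from data.
NEXT_WORD = {
    "the":   [("cat", 0.6), ("dog", 0.4)],
    "cat":   [("sat", 0.7), ("slept", 0.3)],
    "dog":   [("barked", 1.0)],
    "sat":   [("down", 1.0)],
    "slept": [("peacefully", 1.0)],
}

def generate(prompt: str, max_tokens: int = 5) -> None:
    words = prompt.split()
    print(prompt, end="", flush=True)
    for _ in range(max_tokens):
        last = words[-1]
        if last not in NEXT_WORD:        # no known continuation: stop
            break
        candidates, weights = zip(*NEXT_WORD[last])
        nxt = random.choices(candidates, weights=weights)[0]
        words.append(nxt)                # the new word becomes part of the context
        print(" " + nxt, end="", flush=True)  # emitted immediately, hence "word by word"
    print()

generate("the")
```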
GPT-3 has been cited many times for its ability to generate text that comes remarkably close to what a journalist or author might write. Give it the beginning of a sentence and it will complete the rest of the paragraph or article word by word. By extension, the model has demonstrated its ability to handle a wide range of language processing tasks, such as translation, question answering, and filling in missing words in text.
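For reference, this is roughly what driving GPT-3 for such a task looked like through OpenAI's completions endpoint at the time this article was written, using the legacy (pre-1.0) openai Python library. The model name, prompt and parameters here are just plausible examples, and you need your own API key.

```python
# pip install "openai<1.0"  (legacy library, as used around the time of writing)
import openai

openai.api_key = "sk-..."  # your own key; not included here

# Ask a GPT-3.5 completion model to perform a translation task.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Translate the following sentence into French:\n\nHow does ChatGPT work?",
    max_tokens=60,
    temperature=0,
)

print(response["choices"][0]["text"].strip())
```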
GPT-3.5 is a variant of the GPT-3 model. It was trained on a mixture of selected text and code up to the fourth quarter of 2021, which explains why ChatGPT cannot recall facts after that date (and why, when you ask it what day it is today, it answers with a date in the past).
Why does it refuse to answer some questions?
If we ask unethical questions, it refuses to answer, as shown below:
It will politely decline. Unlike Tay and Galactica, ChatGPT's training is moderated at the source using the Moderation API, which allows inappropriate requests to be filtered out during training. Nonetheless, false positives and false negatives can still occur and lead to over-moderation. The Moderation API is a GPT-based classification model that scores content against categories such as violence, self-harm, hate, harassment, and sexual content. To build it, OpenAI uses anonymized data as well as synthetic (zero-shot) data, especially where real data is scarce.
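The Moderation endpoint is publicly available, so you can try this classifier yourself. Below is a minimal sketch with the legacy (pre-1.0) openai Python library; the input text is an invented example, and the exact category names the API returns may differ slightly from the list above.

```python
# pip install "openai<1.0"  (legacy library, as used around the time of writing)
import openai

openai.api_key = "sk-..."  # your own key

# Ask the Moderation API to classify a piece of text.
result = openai.Moderation.create(input="I want to hurt someone.")

output = result["results"][0]
print("flagged:", output["flagged"])
for category, score in output["category_scores"].items():
    print(f"{category:20s} {score:.4f}")
```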
Finally
ChatGPT's ability to simulate real conversation is extraordinary. Even knowing it is a machine, an algorithm, we get caught up in the game of asking it question after question, until its outsized knowledge makes the machine seem almost all-knowing.
But look at it closely and it is still a sentence generator, without human-like understanding or self-criticism. I am all the more curious about what comes next and how far this type of architecture can go.
Reference:
Model Index: https://beta.openai.com/docs/model-index-for-researchers
InstructGPT: https://openai.com/blog/instruction-following/
ChatGPT: https://openai.com/blog/chatgpt/
BLOOM: https://bigscience.huggingface.co/blog/bloom
Y Combinator: https://fr.wikipedia.org/wiki/Y_Combinator