
An easy and objective way to introduce large models to avoid over-interpretation

王林 · 2023-05-12

1. Preface

This article aims to give readers without a computer science background a sense of how ChatGPT and similar artificial intelligence systems (GPT-3, GPT-4, Bing Chat, Bard, etc.) work. ChatGPT is a chatbot built on top of a large language model and designed for conversational interaction. These terms can sound obscure, so I will explain them, along with the core concepts behind them. No technical or mathematical background is required; we will lean heavily on metaphors to make the ideas easier to grasp.

Next, we will start with the basic question of what artificial intelligence is, using as little jargon as possible, and then work our way up to the terms and concepts behind large language models and ChatGPT, explaining them with metaphors along the way. We will also discuss what these technologies imply and what we should, or should not, expect them to be able to do.

2. What is Artificial Intelligence

First, let’s start with some basic terms that you may hear often. So what is artificial intelligence?

Artificial intelligence: an entity that exhibits behavior a person would consider intelligent. There is a problem with using "intelligence" to define artificial intelligence, because "intelligence" itself has no clear definition. Still, the definition works well enough: it basically means that if we see something man-made doing things that are interesting, useful, and seemingly difficult, we may call it intelligent. For example, in computer games we often refer to computer-controlled characters as "AI". Most of these characters are simple programs built from if-then-else rules (e.g., "if the player is in range, fire; otherwise move to the nearest rock and hide"). But as long as the characters keep us engaged and entertained, and don't do anything obviously stupid, we may believe they are more sophisticated than they actually are.
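To make the if-then-else idea concrete, here is a minimal sketch in Python; the function and its inputs are invented for illustration and are not from any actual game:

```python
# A hypothetical game-character "AI" built entirely from if-then-else rules.
def choose_action(player_in_range: bool, reached_nearest_rock: bool) -> str:
    """Pick the character's next action from a couple of hard-coded rules."""
    if player_in_range:
        return "fire"            # attack when the player is close enough
    elif not reached_nearest_rock:
        return "move_to_rock"    # otherwise head for cover
    else:
        return "hide"            # stay hidden behind the rock

print(choose_action(player_in_range=False, reached_nearest_rock=False))  # move_to_rock
```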

Once we understand how something works, it may stop seeming magical, because we had expected something more elaborate to be going on behind the scenes. How impressed we are depends largely on how much we know about what is happening behind the curtain.

The important point is that artificial intelligence is not magic. Because it's not magic, it can be explained.

3. What is machine learning

Another term often associated with artificial intelligence is machine learning.

Machine learning: a way of creating behavior by collecting data, building a model from it, and then executing the model. Sometimes it is too hard to hand-write the pile of if-then-else statements needed to capture a complex phenomenon such as language. In that case, we instead gather large amounts of data and use algorithms that can find patterns in the data to build a model of it.
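As a toy illustration of the difference, the sketch below "learns" a braking rule from a handful of made-up examples instead of having a human write the threshold by hand; the data and the scoring function are purely hypothetical:

```python
# Instead of hand-writing "brake when the obstacle is closer than X metres",
# search for the X that best matches recorded examples of human driving.
data = [(2.0, True), (5.0, True), (12.0, False), (30.0, False), (7.0, True), (20.0, False)]
#        (distance in metres, did the human brake?)

def accuracy(threshold: float) -> float:
    hits = sum((dist < threshold) == braked for dist, braked in data)
    return hits / len(data)

best = max((t / 2 for t in range(80)), key=accuracy)   # try thresholds 0.0 .. 39.5
print(f"learned rule: brake when distance < {best} m")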

So what is a model? A model is a simplified version of a complex phenomenon. A model car, for example, is a smaller, simpler version of a real car: it shares many of the real car's properties but is not meant to fully replace it. A model car can look realistic and is useful for experiments.


Just as we can build a smaller, simpler car, we can build a smaller, simpler model of human language. We use the term "large language model" because these models are, in terms of the amount of memory (GPU memory) they need, very large. The largest models in production, such as ChatGPT, GPT-3, and GPT-4, are so large that they require supercomputer-scale hardware running in data centers to create and run.

4. What is a neural network

There are many ways to learn a model from data, and the neural network is one of them. The technique is loosely based on the structure of the human brain, which consists of a network of interconnected neurons passing electrical signals back and forth, allowing us to do everything we do. The basic concept of the neural network was invented in the 1940s, and the basic concept of how to train one was invented in the 1980s. Neural networks were very inefficient back then, and it was not until computer hardware caught up, around 2017, that we could use them at a large scale.

Personally, though, I prefer to explain neural networks with a circuit metaphor: current flowing through wires, with resistors controlling how much of it gets through.
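In this metaphor, a single junction in the circuit might be sketched like the toy below (illustrative only, not anyone's actual implementation): the weights play the role of resistors and the threshold plays the role of a gate:

```python
# One "neuron" in the circuit metaphor: wires carry the sensor values,
# resistors (weights) scale how much of each signal gets through, and the
# gate only lets current out once enough charge has accumulated.
def neuron(inputs, weights, bias):
    charge = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 if charge > 0 else 0.0   # a simple threshold gate

# front sensor reports "object close" (1), rear sensor reports nothing (0)
print(neuron(inputs=[1, 0], weights=[0.9, 0.1], bias=-0.5))  # 1.0 -> e.g. apply brakes
```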

Imagine we want to make a self-driving car that can drive on the highway. We install distance sensors on the front, back, and sides of the car. Each sensor reports a value of 1 when something is very close and 0 when nothing is detectable nearby.

We also install robotic mechanisms to operate the steering wheel, the brakes, and the accelerator. When the accelerator receives a value of 1 it applies maximum acceleration, and 0 means no acceleration. Likewise, sending 1 to the braking mechanism means slam on the brakes and 0 means no braking. The steering mechanism accepts a value between -1 and 1: negative numbers mean steer left, positive numbers mean steer right, and 0 means keep straight.

Of course, we also need to record how humans drive. When the road ahead is clear, you speed up; when there is a car in front of you, you slow down; when a car gets too close on the left, you steer right and change lanes, assuming there is no car on your right. The behavior is complex: different combinations of sensor readings call for different actions (steer left or right, accelerate or decelerate, brake), so each sensor needs to be connected to each robotic mechanism.
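Written out as explicit rules, the behavior we are trying to capture might look roughly like the sketch below; the sensor and actuator names are made up for illustration:

```python
# A hand-written version of the driving behavior described above.
def drive(front: float, rear: float, left: float, right: float):
    """Sensors report 1.0 when an object is close, 0.0 when nothing is nearby.
    Returns (throttle 0..1, brake 0..1, steering -1..1)."""
    throttle, brake, steering = 0.0, 0.0, 0.0
    if front == 0.0:
        throttle = 1.0        # clear road ahead: speed up
    else:
        brake = 1.0           # car ahead: slow down
    if left == 1.0 and right == 0.0:
        steering = 1.0        # car too close on the left: move right
    return throttle, brake, steering

print(drive(front=0.0, rear=0.0, left=1.0, right=0.0))  # (1.0, 0.0, 1.0)
```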

[Figure: every sensor wired to every robotic mechanism]

So what happens when we drive this car down the road? Current flows from every sensor to every robotic actuator at once, and the car simultaneously steers left, steers right, accelerates, and brakes. It's a mess.

[Figure: current flowing from every sensor to every actuator at once]

So we get out our resistors and start placing them on different parts of the circuit so that current flows more freely between certain sensors and certain actuators. For example, we want current to flow more freely from the front proximity sensor to the brakes than to the steering mechanism. We also install elements called gates, which block current entirely until enough charge accumulates to trip them (for example, only letting current flow when both the front and rear proximity sensors report high values), or which send power onward only when the incoming signal is weak (for example, sending more power to the accelerator when the front proximity sensor reports a low value).
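In code, the fully connected circuit with its resistances amounts to something like the following toy sketch, where each actuator's value is a weighted sum of all the sensor values; the particular numbers are invented:

```python
# Every sensor feeds every actuator; a resistance (weight) on each wire decides
# how strongly that sensor influences that actuator. All numbers are made up.
sensors = {"front": 1.0, "rear": 0.0, "left": 0.0, "right": 0.0}

weights = {
    "brake":    {"front": 0.9,  "rear": 0.0, "left": 0.1, "right": 0.1},
    "throttle": {"front": -0.8, "rear": 0.0, "left": 0.0, "right": 0.0},
    "steering": {"front": 0.0,  "rear": 0.0, "left": 0.7, "right": -0.7},
}

def actuator_value(name: str) -> float:
    current = sum(weights[name][s] * v for s, v in sensors.items())
    return max(-1.0, min(1.0, current))   # clamp to the range each actuator accepts

for name in weights:
    print(name, actuator_value(name))     # brake 0.9, throttle -0.8 -> 0 accel, steering 0.0
```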

But where should we put these resistors and gates? I don't know either. So we place them randomly and try again. Maybe the car drives a little better this time, meaning it sometimes brakes and steers when the data says it should, but it doesn't get everything right, and some things it does worse (it accelerates when the data says it should sometimes brake). So we keep randomly trying different combinations of resistors and gates. Eventually we will stumble on a combination that works well enough, and we declare success. For example, this combination:

[Figure: a combination of resistors and gates that drives acceptably]
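The random search itself can be sketched like this; `score_on_driving_data` is a hypothetical stand-in for replaying the recorded drives and measuring how often the circuit does the right thing:

```python
# Deliberately naive search: keep throwing random resistor settings at the
# circuit and remember whichever scores best on the recorded driving data.
import random

def random_weights(n_sensors: int, n_actuators: int):
    return [[random.uniform(-1, 1) for _ in range(n_sensors)] for _ in range(n_actuators)]

def score_on_driving_data(weights) -> float:
    # Placeholder: in reality this would replay the logged drives and count correct actions.
    return -sum(abs(w) for row in weights for w in row)

best, best_score = None, float("-inf")
for _ in range(10_000):
    candidate = random_weights(n_sensors=4, n_actuators=3)
    score = score_on_driving_data(candidate)
    if score > best_score:
        best, best_score = candidate, score
print("best score found:", best_score)
```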

(In practice, we don't add or remove gates; instead we modify them so that they activate with less energy coming from below, or require more energy from below, or release a lot of energy only when there is very little energy below. Machine learning purists may wince at this description. Technically, it is done by adjusting a bias on each gate, which is not usually shown in diagrams like these; but in the circuit metaphor, you can think of the bias as a wire plugged directly into a power source, which can then be modified just like any other wire.)


Trying things at random is a bad way to search. An algorithm called backpropagation is reasonably good at guessing how to change the circuit configuration. The details of the algorithm don't matter here; just know that it makes many small adjustments to the circuit to bring its behavior closer to what the data suggests, and after thousands or millions of little tweaks it can eventually produce something that matches the data.
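Here is a tiny illustration of the underlying idea, nudging a single parameter in the direction that reduces the error; real backpropagation computes these nudges analytically across millions of parameters, whereas this toy estimates one of them numerically:

```python
# Nudge one "resistor" so the circuit's output moves toward what the data says.
def error(weight: float) -> float:
    sensor, target_brake = 1.0, 1.0        # front sensor says "close", data says "brake hard"
    predicted_brake = weight * sensor
    return (predicted_brake - target_brake) ** 2

w, lr, eps = 0.0, 0.1, 1e-6
for _ in range(1000):
    gradient = (error(w + eps) - error(w)) / eps   # which way does the error grow?
    w -= lr * gradient                             # fine-tune the parameter a little
print(round(w, 3))   # approaches 1.0, the value the data suggests
```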

We call the resistors and gates parameters because they really are everywhere, and what the backpropagation algorithm does is declare each resistor a bit stronger or weaker. So if we know the layout of the circuit and the parameter values, we can replicate the whole circuit in other cars.
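In other words, once trained, the circuit is just its layout plus a list of numbers, and copying the behavior to another car means copying those numbers; a toy sketch with made-up parameter values:

```python
# Export the trained parameters from one car and load them in another.
import json

parameters = {"brake": [0.9, 0.0, 0.1, 0.1], "throttle": [-0.8, 0.0, 0.0, 0.0]}

with open("car_circuit.json", "w") as f:
    json.dump(parameters, f)               # save from the first car

with open("car_circuit.json") as f:
    cloned = json.load(f)                  # load the same parameters in another car
print(cloned == parameters)                # True
```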

