
Will ChatGPT really take over the world?

WBOY
2023-04-12 19:28:01


ChatGPT is a new technology developed by OpenAI that is so good at mimicking human communication that many believe it will soon take over the world — and all the jobs in it.

In a Feb. 8 exchange organized by Brown University’s Carney Institute for Brain Science, two scholars from different research fields discussed the similarities between artificial intelligence and human intelligence.

The discussion on the neuroscience of ChatGPT gave attendees a peek behind the scenes of current machine learning models.

Ellie Pavlick, an assistant professor of computer science at Brown and a research scientist at Google AI, said that despite all the buzz around the new technology, the model itself is not that complex, or even new.

At its most basic level, ChatGPT is a machine learning model designed to predict the next word in a sentence, then the next, and so on, she explained.

This type of predictive learning model has been around for decades, Pavlick said. Computer scientists have long tried to build models that exhibit this behavior and can converse with humans in natural language. To do this, a model needs access to a database of traditional computing components that lets it "reason" about complex ideas.
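To make the prediction task concrete, here is a minimal, purely illustrative sketch: a toy bigram model that picks the next word from co-occurrence counts. The corpus and function names are hypothetical, and nothing here reflects how ChatGPT is actually built; modern systems replace the counting with an enormous neural network trained on far more text.

```python
# Toy next-word predictor (illustrative only): count which word tends to
# follow which, then predict the most frequent continuation. ChatGPT does
# the same basic task with a large neural network instead of raw counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in the training text."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # "on": the only continuation ever observed
print(predict_next("the"))  # whichever of cat/mat/dog/rug was counted first
```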

What is new is the way ChatGPT is trained or developed. It has access to unfathomable amounts of data—"all the sentences on the Internet," as Pavlick puts it.

“ChatGPT itself is not an inflection point,” Pavlick said. “The inflection point was sometime in the last five years, when essentially the same structural models started getting bigger and bigger. What’s happening is that as they get bigger, their performance keeps getting better.”

The way ChatGPT and its competitors are freely available to the public is also novel. Even a year ago, Pavlick said, interacting with a system like ChatGPT required access to something like Brown's Compute Grid, a dedicated tool that only students, faculty and staff with specific permissions could use, along with a fair amount of technical proficiency.

But now, anyone, regardless of technical ability, can use ChatGPT’s sleek, streamlined interface.

Does ChatGPT really think like a human?

The result of training a computer system on such a large data set, Pavlick says, is that it seems to recognize general patterns and appears able to generate very realistic articles, stories, poems, dialogue, plays and more.

It can generate fake news reports and fake scientific findings, and produce all kinds of surprisingly valid results – or “outputs”.

The validity of these results has led many to believe that machine learning models can think like humans. But do they?

ChatGPT is an artificial neural network, explains Thomas Serre, professor of cognitive, linguistic and psychological sciences and computer science. This means that its software is organized as a set of interconnected nodes, inspired by a simplified picture of the neurons in the brain.

Serre said there are indeed many fascinating similarities in the way computer brains and human brains learn new information and use it to perform tasks.

"Research is beginning to suggest, at least superficially, that there may be some connection between the types of word and sentence representations that algorithms like ChatGPT use and exploit to process linguistic information and what the brain seems to be doing, "He said.

For example, the backbone of ChatGPT is a state-of-the-art artificial neural network called a Transformer network. These networks, derived from natural language processing research, have recently come to dominate the entire field of artificial intelligence.

Transformer networks have a special mechanism that computer scientists call "self-attention," which is related to the attention mechanism known to occur in the human brain.
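For a rough sense of the mechanism, here is a minimal sketch of scaled dot-product self-attention, the core operation of Transformer networks: every token scores its relevance to every other token, then takes a weighted mix of their representations. The tiny dimensions and random weights are illustrative only.

```python
# Minimal scaled dot-product self-attention (illustrative only).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise relevance scores
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                         # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)          # (4, 8)
```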

Another similarity to the human brain is a key part of what makes the technology so advanced, Serre said.

In the past, training artificial neural networks on computers to learn and use language or perform image recognition required scientists to perform tedious, time-consuming manual tasks such as building databases and labeling object categories, he explained.

Modern large language models, such as the one behind ChatGPT, can be trained without such explicit human supervision. And this appears to be related to an influential brain theory Serre cites, called predictive coding. The theory assumes that when a person hears someone speak, the brain is constantly making predictions, anticipating what will be said next.
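A tiny sketch of why such training needs no human labeling: in next-token prediction, the target at every position is simply the following token in the raw text, so the data supervises itself. This toy example is illustrative only, not OpenAI's pipeline.

```python
# Self-supervised next-token prediction (illustrative only): the training
# "labels" are just the input sequence shifted by one position, so no
# human annotation is required.
tokens = ["the", "brain", "predicts", "the", "next", "word"]

inputs  = tokens[:-1]  # what the model sees
targets = tokens[1:]   # what it must predict

for x, y in zip(inputs, targets):
    print(f"given ...{x!r}, predict {y!r}")
```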

Although the theory was proposed decades ago, Serre said, it has not yet been fully tested in neuroscience. However, it is currently driving a large amount of experimental work.

"I would say that at least on these two levels, the attention mechanism of the core engine of this network is constantly predicting what is going to be said, which seems to be related to neuroscience on a very coarse level. idea,” Serre commented.

A recent study linked the strategies used by large language models to actual brain processes, noting: "We still have a lot to learn, but there is a growing body of research in neuroscience showing that what these large language models and visual models do [in computers] is not completely unrelated to what our brains do when we process natural language."

On a darker note, just as the human learning process is susceptible to bias or corruption, so are AI models. These systems learn through statistical correlations, Serre said: whatever information dominates the data set will take over and push out the rest.

“This is an area of great focus for artificial intelligence, and it is not specific to language,” Serre said. He cited how the over-representation of white men on the internet has biased some facial recognition systems to the point where they fail to recognize faces that don’t look white or male.

“Systems are only as good as the training data we feed them, and we know the training data isn’t all that good to begin with,” Serre said.

Data is not unlimited either, he added, especially given the scale of these systems and their voracious appetites.

The latest version of ChatGPT includes reinforcement learning layers that act as guardrails to help prevent harmful or hateful content, Pavlick said. But these are still a work in progress.

"Part of the challenge is... you can't give a model a rule — you can't just say, 'Don't ever generate such and such a thing,'" Pavlick said.

"It learns by example, so you give it a lot of examples of things to do and then say, 'Don't do such a thing. Do such a thing.' So it's always possible to find some little trick for it to do bad things." .”

ChatGPT Doesn’t Dream

One area where the human brain and neural networks differ is during sleep, specifically during dreaming. Although AI-generated text or images may seem surreal, abstract or absurd, Pavlick said there is no evidence of a functional similarity between the biological process of dreaming and the computational processes that generate AI output.

She said it's important to understand that applications like ChatGPT are static systems: in other words, they don't evolve or change in real time as people use them, although they may continue to be improved offline.

"It's not like [ChatGPT] is replaying and thinking and trying to combine things in new ways to solidify whatever it knows or what's going on in the brain," Pavlik said.

"It's more like: Done. That's the system. We call it a forward pass through the network - there's no feedback from it. It's not reflecting on what it just did, and it's not updating it way."

When an AI is asked to produce, for example, a rap song about the Krebs cycle, or a psychedelic image of someone's dog, the output may look creative, Pavlick said. But in reality it's just a mashup of tasks the system has been trained to do.

Unlike with human language users, each output does not automatically shape subsequent outputs, enhance the system's capabilities, or work in the way people think dreams work.

Serre and Pavlick stressed that the caveat in any discussion of human intelligence or artificial intelligence is that scientists still have a lot to learn about both systems.

As for the hype around ChatGPT, specifically the success of neural networks in creating chatbots that can seem almost more human than humans, Pavlick says it's well-deserved, especially from a technology and engineering perspective.

"This is very exciting!" she said. "We've wanted a system like this for a long time."

