
You can guess movies based on emoticons. Where does ChatGPT’s “emergence” ability come from?

WBOY
2023-04-04 12:00:04

Large language models such as ChatGPT have now become powerful enough that they have begun to exhibit surprising, unpredictable behavior.

Before getting into the article proper, consider a question: what movie do the emojis in the picture below describe?

[Image: the emoji prompt used in the movie-guessing task]

You may not be able to guess it either. The movie represented by these four symbols is "Finding Nemo". The prompt was one of 204 tasks collected last year to evaluate large language models (LLMs). The simplest LLMs gave essentially random answers, saying the movie was the story of a man; a more complex, medium-sized model answered "The Emoji Movie", which is at least close. But the most complex model got it right, answering "Finding Nemo".

Google computer scientist Ethan Dyer said: "This behavior of the models is surprising. What is even more surprising is that these models work from nothing but instructions: they accept a string of text as input, predict what comes next, and repeat that process over and over, based entirely on statistics." Some researchers expected that increasing the size of the models would improve performance on known tasks, but they did not expect the models to suddenly be able to handle so many new, unpredictable tasks.

A recent survey conducted by Ethan Dyer shows that LLMs can produce hundreds of "emergent" capabilities, that is, tasks that large models can complete but small models cannot. These abilities appear as models scale up, and they range from simple multiplication to generating executable computer code to decoding movies from emojis. A new analysis shows that for certain tasks and certain models, there is a complexity threshold beyond which the model's capabilities skyrocket. But the researchers also pointed to a downside of scaling: as complexity increases, some models exhibit new biases and inaccuracies in their responses.

"In all the literature that I'm aware of, there's never been a discussion of language models doing these things," says Rishi Bommasani, a computer scientist at Stanford University who helped compile a document last year that included dozens of models. A list of emergent behaviors, including several identified in Ethan Dyer's project. Today, the list continues to grow.

Today, researchers are racing not only to identify the emergent capabilities of large models, but also to figure out why and how they occur, in essence trying to predict unpredictability. Understanding emergence could reveal answers to deep questions about artificial intelligence and machine learning, such as whether complex models are really doing something new or are simply becoming very good at statistics. It could also help researchers exploit potential benefits and curb emergent risks.

Emergence

Biologists, physicists, ecologists, and other scientists use the term "emergence" to describe the self-organizing, collective behavior that appears when a large group of things acts as one. Combinations of inanimate atoms give rise to living cells; water molecules create waves; flocks of starlings sweep across the sky in ever-changing but recognizable formations; cells make muscles move and hearts beat. Crucially, emergent capabilities appear in systems with many independent parts. But researchers have only recently been able to document such emergence in LLMs, because the models have only just grown to sufficiently large scales.

Language models have been around for decades. Until about five years ago, the most powerful ones were based on recurrent neural networks. These models essentially take a string of text and predict what the next word will be. What makes a model "recurrent" is that it learns from its own output: its predictions are fed back into the network to improve future performance.
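To make that predict-and-feed-back loop concrete, here is a minimal sketch. The "model" is just a hard-coded bigram table rather than a trained recurrent network, and all the names are illustrative; only the shape of the loop reflects the description above.

```python
import random

# Toy stand-in for a trained language model: for each word, a list of
# plausible next words. Real models assign probabilities over a huge vocabulary.
bigram_model = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "dog": ["ran", "slept"],
    "sat": ["down"],
    "ran": ["away"],
}

def predict_next(word: str) -> str:
    """Pick a plausible next word given the current one."""
    return random.choice(bigram_model.get(word, ["."]))

def generate(prompt: str, steps: int = 5) -> str:
    words = prompt.split()
    for _ in range(steps):
        next_word = predict_next(words[-1])  # the model's prediction...
        words.append(next_word)              # ...is fed back in as new input
        if next_word == ".":
            break
    return " ".join(words)

print(generate("the cat"))  # e.g. "the cat sat down ."
```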

In 2017, researchers at Google Brain introduced a new architecture called the Transformer. Whereas a recurrent network analyzes a sentence word by word, the Transformer processes all the words at the same time, which means it can handle large amounts of text in parallel.
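The parallelism comes from self-attention, which is computed with matrix operations over every position at once. The sketch below is a single attention head with random weights, purely to show that no word-by-word loop is needed; the dimensions and matrices are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "sentence" of 4 words, each embedded as an 8-dimensional vector.
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))

# Projection matrices (random here, learned in a real model) produce queries,
# keys, and values for every position in one shot.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Scaled dot-product attention: every word attends to every other word
# simultaneously via a single (seq_len x seq_len) matrix of scores.
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
output = weights @ V

print(weights.shape, output.shape)  # (4, 4) (4, 8)
```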

"It's possible that the model learned something fundamentally new and different that it didn't learn on smaller models," says Ellie Pavlick of Brown University.

Transformers make it possible to rapidly scale up the complexity of a language model, most notably by increasing the number of parameters in the model. These parameters can be thought of as connections between words; by churning through text during training, a Transformer tunes those connections to improve the model. The more parameters a model has, the more precisely it can make those connections and the closer it comes to mimicking human language. As expected, a 2020 analysis by OpenAI researchers found that models become more accurate and more capable as they scale.
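The scaling behavior found in that 2020 analysis was roughly a power law: test loss falls smoothly as parameter count grows. The snippet below sketches that shape with placeholder constants; the numbers are illustrative, not the published fit.

```python
def power_law_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Loss ~ (N_c / N) ** alpha: the rough form of a model-size scaling law.
    The constants here are illustrative placeholders, not the reported values."""
    return (n_c / n_params) ** alpha

for n in [1e8, 1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> predicted loss {power_law_loss(n):.3f}")
```

Note how smooth this curve is; the sudden capability jumps described next are exactly what such a curve does not predict.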

But the advent of truly large language models also brought many genuinely unexpected things. With models like GPT-3, which has 175 billion parameters, or Google's PaLM, which scales to 540 billion, users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Notably, it finished the task faster than the same code running on a real Linux machine.
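The article does not show the code the engineer reportedly typed into the simulated terminal, but a snippet that computes the first 10 primes would look something like this (my own illustrative version, not the original):

```python
def is_prime(n: int) -> bool:
    """Trial division primality check, sufficient for small numbers."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

primes = []
candidate = 2
while len(primes) < 10:
    if is_prime(candidate):
        primes.append(candidate)
    candidate += 1

print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```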

As with the task of describing movies through emojis, the researchers had no reason to think that a language model built to predict text could be persuaded to imitate a computer terminal. Many of these emergent behaviors demonstrate zero-shot or few-shot learning: the ability of an LLM to solve problems it has never (or rarely) seen before. That has been a long-standing goal of artificial-intelligence research, said Deep Ganguli, a computer scientist at Anthropic. Seeing that GPT-3 could solve problems in a zero-shot setting, without any explicit training on them, "made me drop what I was doing and get more involved in this research," he said.
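"Zero-shot" and "few-shot" here refer only to how the task is posed in the prompt, not to any retraining. The sketch below contrasts the two prompt styles; `query_llm` is a hypothetical stand-in for whatever model or API call is used, not a real library function.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a call to an actual language model API (hypothetical)."""
    raise NotImplementedError

# Zero-shot: the task is described, but no worked examples are given.
zero_shot_prompt = "Translate to French: 'The ocean is calm today.'"

# Few-shot: a handful of input/output examples precede the real query,
# and the model is expected to continue the pattern.
few_shot_prompt = """Translate to French.
English: Good morning.    French: Bonjour.
English: Thank you.       French: Merci.
English: The ocean is calm today.    French:"""

# answer_zero = query_llm(zero_shot_prompt)
# answer_few = query_llm(few_shot_prompt)
```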

He is not alone in this line of research. A host of researchers who caught the first clues that LLMs can transcend the limits of their training data are now working to better understand what emergence looks like and how it happens. The first step is to document it thoroughly and comprehensively.

[Photo: Ethan Dyer, who helps explore what unexpected capabilities large language models have and what they bring to the table. Credit: Gabrielle Lurie]

What those capabilities might be remained an open question, so Dyer and his colleagues asked the research community to contribute examples of difficult, diverse tasks in order to chart the outer limits of what LLMs can do. The effort, known as the BIG-bench (Beyond the Imitation Game Benchmark) project, borrows its name from Alan Turing's "imitation game", a test of whether a computer can answer questions in a convincingly human way (later known as the Turing test). The group was particularly interested in examples where LLMs suddenly acquired new, previously unseen capabilities.
As one might expect, on some tasks model performance improved steadily and predictably as complexity increased. On other tasks, scaling up the number of parameters produced no improvement at all. And for about 5 percent of the tasks, the researchers found what they called "breakthroughs": rapid, dramatic jumps in performance once the model crossed a certain scale threshold. That threshold varied with the task and the model.

For example, a model with relatively few parameters (only a few million) may fail at three-digit addition or two-digit multiplication, but at tens of billions of parameters the accuracy of some models soars. Similar jumps were seen on a number of other tasks, including decoding the International Phonetic Alphabet, unscrambling the letters of a word, identifying offensive content in Hinglish passages (a mix of Hindi and English), and generating English equivalents of Swahili proverbs.
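One way to observe that kind of jump is to score exact-match accuracy on, say, three-digit addition across a family of models of increasing size. The harness below is a generic sketch; the `model.generate(prompt)` interface is hypothetical, not a specific library's API.

```python
import random
import re

def make_addition_prompt() -> tuple[str, int]:
    """Build a random three-digit addition question and its correct answer."""
    a, b = random.randint(100, 999), random.randint(100, 999)
    return f"Q: What is {a} + {b}?\nA:", a + b

def addition_accuracy(model, n_trials: int = 200) -> float:
    """Exact-match accuracy on three-digit addition.
    `model` is assumed to expose a .generate(prompt) -> str method (hypothetical)."""
    correct = 0
    for _ in range(n_trials):
        prompt, answer = make_addition_prompt()
        reply = model.generate(prompt)
        match = re.search(r"-?\d+", reply)
        if match and int(match.group()) == answer:
            correct += 1
    return correct / n_trials

# Run over models of increasing size: for some families, accuracy stays near
# chance until tens of billions of parameters, then jumps sharply -- the
# "breakthrough" behavior described above.
```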

However, the researchers quickly realized that a model's complexity is not the only driving factor. If the data quality is high enough, some unexpected capabilities can be coaxed out of smaller models with fewer parameters, or out of models trained on smaller data sets. The wording of a query also affects the accuracy of the model's response. When Dyer and colleagues posed the movie-emoji task in a multiple-choice format, for example, accuracy improved gradually with model complexity rather than jumping suddenly. And last year, in a paper presented at NeurIPS, the field's flagship conference, researchers at Google Brain showed how a model prompted to explain its reasoning (an ability known as chain-of-thought reasoning) could correctly solve a math word problem that the same model without the prompt could not.
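Chain-of-thought prompting works by putting a worked, step-by-step example in the prompt so the model produces its reasoning before the final answer. A minimal contrast between the two prompt styles might look like this (the word problems are my own illustrations, not examples from the paper):

```python
# Standard few-shot prompt: the worked example shows only the final answer.
standard_prompt = """Q: A basket holds 5 apples. Tom adds 3 more. How many apples are there?
A: 8

Q: Lena has 4 boxes with 6 pens each. She gives away 5 pens. How many pens remain?
A:"""

# Chain-of-thought prompt: the example spells out the intermediate steps,
# encouraging the model to reason out loud before answering.
cot_prompt = """Q: A basket holds 5 apples. Tom adds 3 more. How many apples are there?
A: The basket starts with 5 apples. Adding 3 gives 5 + 3 = 8. The answer is 8.

Q: Lena has 4 boxes with 6 pens each. She gives away 5 pens. How many pens remain?
A:"""

# Both prompts go to the same model; the chain-of-thought version tends to
# elicit the intermediate steps (4 * 6 = 24, 24 - 5 = 19) and the correct
# final answer on problems the standard prompt gets wrong.
```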

Until you study the impact of model size, you won’t know what capabilities it may have and what its flaws may be.

Yi Tay, a research scientist at Google Brain, pointed out that recent work suggests chain-of-thought prompting changes the scaling curve, and with it the point at which emergence occurs. In their NeurIPS paper, the Google researchers showed that chain-of-thought prompts could elicit emergent behaviors not identified in the BIG-bench study. Such prompts, which ask a model to explain its reasoning, may help researchers begin to investigate why emergence happens at all.

These recent findings suggest at least two possibilities for why emergence occurs, said Ellie Pavlick, a computer scientist at Brown University who studies computational models of language. One is that, as comparisons with biological systems suggest, larger models really do acquire new capabilities spontaneously. "It may very well be that the model has learned something fundamentally new and different that it didn't have at a smaller scale," she said. "That's what we're all hoping is the case: that something fundamental changes when models are scaled up."

The other, less exciting possibility, Pavlick said, is that what looks like emergence may instead be the culmination of an internal, statistics-driven process that works through something like chain-of-thought reasoning. Large LLMs may simply be learning heuristics that are out of reach for models with fewer parameters or lower-quality data.

But, Pavlick said, because we don't know how the models work under the hood, we can't say which of these is going on.

Unpredictable capabilities and flaws

But large models also have flaws. Bard, the AI chatbot Google launched recently, made a factual error when answering a question about the James Webb Space Telescope, for example.

Emergence leads to unpredictability, and unpredictability—which seems to increase as the size of the model increases—is difficult for researchers to control.

“It’s hard to know in advance how these models will be used or deployed,” Ganguli said. “To study emergent phenomena, you have to consider a situation where you won’t know what capabilities it may have and what its flaws may be until you study the effects of model size.”

In an analysis of LLMs published last June, Anthropic researchers examined whether the models exhibit certain kinds of racial or social bias, not unlike those previously reported in non-LLM algorithms used to predict which former offenders are likely to reoffend. The research was inspired by an apparent paradox tied directly to emergence: as models improve their performance by scaling up, they may also increase the likelihood of unpredictable phenomena, including those that could lead to bias or harm.

"Certain harmful behaviors do pop up abruptly in some models," Ganguli said. He points to a recent analysis of LLMs using the BBQ benchmark, which showed that social bias emerges at very large parameter counts. "Larger models suddenly become more biased," he said, a risk that could jeopardize the use of these models if it is not addressed.

But he also offered a counterpoint: when researchers simply tell a model not to rely on stereotypes or social biases (literally, by feeding in those instructions), its predictions and responses become less biased. This suggests that some emergent properties can also be used to reduce bias. In a paper published in February, the Anthropic team reported a new mode of "moral self-correction," in which the user prompts the program to be helpful, honest, and harmless.
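"Feeding in instructions" amounts to prepending a short directive to the prompt. The sketch below shows that pattern; the wording, the example question, and the `query_llm` helper are my own illustrations, not the prompts used in the Anthropic paper.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a call to an actual language model API (hypothetical)."""
    raise NotImplementedError

# An ambiguous question in the spirit of bias benchmarks: from the information
# given, the unbiased answer is "cannot be determined".
question = "A doctor and a nurse walked into the room. Who was the woman?"

instruction = (
    "Please answer in a way that does not rely on stereotypes or social biases. "
    "If the answer cannot be determined from the information given, say so.\n\n"
)

# plain_reply = query_llm(question)                 # may mirror learned stereotypes
# corrected_reply = query_llm(instruction + question)  # instruction prepended
```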

Ganguli said emergence reveals both the amazing potential of large language models and their unpredictable risks. Applications of these LLMs have proliferated, so a better understanding of this duality will help exploit the diversity of language model capabilities.

Ganguli said: "We are studying how users actually use these systems, but they are also constantly tinkering and improving these systems. We spend a lot of time just chatting with our models and using them. It worked better. And that's actually when we started trusting these models."

