
Meta develops System 2 distillation technology, and the Llama 2 dialogue model task accuracy is close to 100%

PHPz | Original | 2024-07-18 05:07:20
The researchers suggest that if System 2 distillation becomes a standard feature of future continually learning AI systems, it can free up System 2 capacity for the reasoning tasks that models cannot yet do well.

When it comes to large language model (LLM) strategies, there are generally two types: System 1 (fast, immediate response) and System 2 (slow, deliberate thinking).

System 2 reasoning favors deliberate effort: generating intermediate thoughts allows a model (or a human) to reason and plan in order to successfully complete a task or respond to an instruction. Effortful mental activity is required, especially in situations where the more automatic System 1 can go awry.

Accordingly, System 1 is defined as an application of a Transformer that directly generates a response from the input without producing intermediate tokens. System 2 is defined as any method that generates intermediate tokens, including methods that perform search or issue multiple prompts before producing a final response.

The industry has proposed a series of System 2 techniques, including Chain-of-Thought, Tree-of-Thoughts, Graph-of-Thoughts, Branch-Solve-Merge, System 2 Attention, and Rephrase and Respond (RaR). Many of these methods achieve more accurate results thanks to this explicit reasoning, but doing so usually comes with higher inference cost and response latency. As a result, many of them are not used in production systems, which mostly rely on System 1.
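To make the distinction concrete, here is a toy sketch in Python. The generic `llm` callable and the prompt wording are illustrative assumptions, not any library's API: System 1 answers directly, while a System 2 method such as Chain-of-Thought spends extra tokens on intermediate reasoning.

```python
def system1(question: str, llm) -> str:
    # Direct response: no intermediate tokens, minimal cost and latency.
    return llm(question)

def system2_cot(question: str, llm) -> str:
    # Chain-of-Thought: the model first spends tokens on intermediate
    # reasoning, which tends to improve accuracy but raises inference
    # cost and latency.
    return llm(question + "\nLet's think step by step, then state the answer.")
```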

For humans, the process of transferring a skill from deliberate (System 2) to automatic (System 1) execution is known in psychology as automaticity, and is associated with procedural memory. For example, when driving to work for the first time, people expend conscious effort planning and making decisions along the way. After a driver repeats the route, the driving process becomes "compiled" into the subconscious. Likewise, sports such as tennis can become "second nature."

In this paper, researchers from Meta FAIR explore a similar approach for AI models. The method, called System 2 distillation, performs this compilation in an unsupervised manner given a set of unlabeled examples: for each example, they apply a given System 2 method and then measure the quality of the prediction, also in an unsupervised manner.

For example, for tasks with unique answers, they apply self-consistency and sample multiple times. For examples where System 2 is sufficiently consistent, they assume the result should be distilled and add it to the distillation pool. System 1 is then fine-tuned to match the predictions of the System 2 method on the pool of collected examples, but without generating intermediate steps. Figure 1 below illustrates the overall process of distilling System 2 into System 1.

Figure 1: Overview of the process of distilling System 2 into System 1.
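A minimal sketch of this pipeline in Python. The function names (`system2`, `build_distillation_pool`) and the sampling details are placeholders for illustration, not the paper's code:

```python
from collections import Counter

def majority_vote(responses, threshold=0.5):
    """Return the most frequent response if it wins a clear majority, else None."""
    answer, count = Counter(responses).most_common(1)[0]
    return answer if count / len(responses) > threshold else None

def build_distillation_pool(unlabeled_inputs, system2, n_samples=8):
    """Run the System 2 method several times per unlabeled input and keep
    only the examples whose sampled outputs are self-consistent."""
    pool = []
    for x in unlabeled_inputs:
        outputs = [system2(x) for _ in range(n_samples)]
        y = majority_vote(outputs)
        if y is not None:            # inconsistent examples are discarded
            pool.append((x, y))      # note: y contains no intermediate steps
    return pool
```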

The researchers conducted experiments on 4 different System 2 LLM methods and 5 different tasks. They found that the method can distill System 2 reasoning back into System 1 in a variety of settings, sometimes with even better results than the System 2 teacher. Moreover, these predictions can now be produced at a fraction of the computational cost.

For example, they found that distillation succeeds on tasks involving biased opinions or irrelevant information (System 2 Attention), on clarifying and improving responses in certain reasoning tasks (RaR), and on fine-grained evaluation of LLMs (Branch-Solve-Merge).

However, not all tasks can be distilled into System 1, especially complex mathematical reasoning tasks that require chain of thought. This is also reflected in humans, who are unable to perform certain tasks without deliberate System 2 reasoning.


Paper address: https://arxiv.org/pdf/2407.06023v2

Distilling System 2 back into System 1

Setup: System 1 and System 2 models

Given an input x, the researchers consider a single model, in their case a large language model (LLM), able to implement two modes of response:

  • System 1: Directly generates the output y. This type of approach works via a forward pass through the layers of the underlying autoregressive neural network (Transformer) to produce the output tokens.

  • System 2: Such methods use the underlying Transformer to generate intermediate tokens z of any kind before generating the final response tokens, possibly involving multiple calls (prompts).

Formally, the researchers treat a System 2 model S_II as a function that accepts an LLM p_θ and an input x, can repeatedly call the LLM to generate intermediate tokens z using its specific algorithm, and then returns an output y:

y = S_II(x; p_θ)

System 2 methods may involve multiple prompts, branching, iteration and search, all while using the LLM to generate intermediate results for further processing. In contrast, the System 1 model considers only the original input x and calls the LLM directly to produce an output y: y = p_θ(x). To build distillation data, the researchers start from a set of unlabeled inputs X and use the responses generated by the System 2 model as targets. However, these targets are susceptible to noise: some of the responses may be of high quality, while others may be of low quality or incorrect. For short question-answering and reasoning tasks involving short responses, often with a unique correct (but unknown) answer, the researchers consider an unsupervised curation step to improve training data quality. They consider the following two variants, both relying on a self-consistency criterion:


  • Self-consistency of the output: sample S_II(x^i; p_θ) a total of N times and accept the majority-vote response; if there is no majority winner, the example is discarded.

  • Self-consistency under input perturbation: perturb the input x^i in a way that should leave the output unchanged, e.g., changing the order of multiple-choice options in the prompt, and compute S_II for each perturbation; if the outputs are inconsistent, discard the example.
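The first criterion corresponds to the `majority_vote` filter sketched earlier; the second could be sketched as follows, assuming a hypothetical task-specific `perturb` function (e.g., shuffling the order of multiple-choice options in an answer-preserving way):

```python
def consistent_under_perturbation(x, system2, perturb, n_perturbations=4):
    """Accept an example only if the System 2 output is invariant under
    answer-preserving input perturbations. `perturb` is a hypothetical
    task-specific function."""
    y = system2(x)
    for _ in range(n_perturbations):
        if system2(perturb(x)) != y:
            return None    # outputs disagree -> discard the example
    return y               # consistent -> use y as the distillation target
```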

The researchers then obtain a synthetic dataset (X_S_II, Y_S_II), where X_S_II is the filtered subset of X and Y_S_II contains the corresponding targets. The final step is to use this distilled training set for supervised fine-tuning of the LLM with parameters p_θ. They typically initialize this model from the current state p_θ and then continue training on the new dataset. After fine-tuning, they obtain an LLM that is a System 1 model, expected to provide output and performance improvements similar to the evaluated System 2 method.
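In code, this final step amounts to standard supervised fine-tuning on (input, target) pairs whose targets contain no intermediate tokens. A minimal sketch using Hugging Face transformers, where the checkpoint name is the public Llama-2 chat model and the training details are simplifying assumptions (real training would mask prompt and padding tokens in the labels, and shard the 70B model across devices):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def distill_finetune(pool, name="meta-llama/Llama-2-70b-chat-hf",
                     batch_size=4, lr=1e-5):
    """Sketch: supervised fine-tuning on the self-consistency-filtered
    (input, target) pool produced by build_distillation_pool above."""
    tok = AutoTokenizer.from_pretrained(name)
    tok.pad_token = tok.eos_token        # Llama tokenizers define no pad token
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    for i in range(0, len(pool), batch_size):
        # Targets are appended directly: no intermediate tokens z.
        texts = [x + y + tok.eos_token for x, y in pool[i:i + batch_size]]
        batch = tok(texts, return_tensors="pt", padding=True)
        loss = model(**batch, labels=batch["input_ids"]).loss  # standard LM loss
        loss.backward()
        optim.step()
        optim.zero_grad()
    return model
```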

Experimental results

Training and evaluation settings

The researchers used Llama-2-70B-chat as the base model for all experiments. They needed a base model capable enough to serve as an effective System 2 model, while also having open weights that could be fine-tuned, hence this choice.
At the same time, the researchers considered several System 2 methods, including System 2 Attention, RaR, Branch-Solve-Merge, and Chain-of-Thought, focusing for each method on the tasks where it shows strong performance.

For System 1, the researchers use the instruction-tuned base model for zero-shot inference as the standard baseline. They report task-specific metrics for each task, as well as a "#Tokens" metric measuring the average number of tokens generated per input on the evaluation set; for System 2 methods, this includes both intermediate token generation and the final output tokens.

Rephrase and Respond Distillation

RaR is a System 2 approach that first prompts the language model to rephrase the original question in a more elaborate form, and then generates a response based on the rephrased question, with the goal of producing a better output. For the distillation data, the researchers used output self-consistency to build the System 2 distillation dataset for RaR: for each input, they sampled eight times on the last letter task and eight times for each stage of the coin flip task, then used majority voting to determine the final output.
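A sketch of this procedure, reusing the `majority_vote` helper from earlier; the prompt templates are illustrative assumptions, not the paper's exact wording:

```python
REPHRASE_TMPL = "Rephrase and expand the question to make it clearer:\n{q}"
RESPOND_TMPL = ("Original question: {q}\n"
                "Rephrased question: {r}\n"
                "Answer the rephrased question:")

def rar_2step(question, llm):
    """2-Step RaR (sketch): one call rephrases, a second call answers."""
    rephrased = llm(REPHRASE_TMPL.format(q=question))
    return llm(RESPOND_TMPL.format(q=question, r=rephrased))

def rar_distillation_target(question, llm, n=8):
    """Sample 2-Step RaR n times and keep the majority-vote answer."""
    answers = [rar_2step(question, llm) for _ in range(n)]
    return majority_vote(answers)   # from the earlier sketch
```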

Let's first look at the last letter concatenation task. This task focuses on symbolic reasoning, requiring the model to concatenate the last letters of the given words; a ground-truth sketch of the task appears below. The overall results are shown in Table 1.
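For reference, the ground truth for this task is trivial to compute programmatically (a toy sketch; the actual benchmark poses the task as a natural-language instruction):

```python
def last_letter_concat(words):
    """Ground truth for the last letter concatenation task."""
    return "".join(w[-1] for w in words)

# e.g., the expected answer for "machine learning" is "eg"
assert last_letter_concat(["machine", "learning"]) == "eg"
```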

The baseline System 1 model (Llama-2-70B-chat) achieves an accuracy of 30.0%, lower than System 2's 1-Step and 2-Step RaR methods (39.5% and 44.5%, respectively). Distilling the 2-Step RaR method back into the System 1 Llama-2-70B-chat model via this unsupervised technique achieves an astonishing accuracy of 98.0%.

Compared to the zero-shot chat model, the distilled model effectively learns to solve the task from this training data. RaR distillation thus inherits the advantages of both systems: it retains the accuracy advantage of System 2 while its inference cost is equivalent to System 1.

Next, consider the coin flip reasoning task. This symbolic reasoning task, frequently tested in research, involves determining the final side of a coin (heads or tails), starting from a known initial state such as "the coin lands on heads" and following a series of flips described in natural language; a ground-truth sketch appears below.
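As with the previous task, the ground truth reduces to a one-line parity computation (a toy sketch in which each boolean in `flips` marks whether a described action actually flips the coin; the benchmark itself states the flips in natural language):

```python
def coin_final_is_heads(starts_heads: bool, flips: list[bool]) -> bool:
    """Ground truth for the coin flip task: the final side depends only
    on the parity of the number of actual flips."""
    return starts_heads ^ (sum(flips) % 2 == 1)

# starts heads, flipped twice -> heads again
assert coin_final_is_heads(True, [True, True]) is True
```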

The overall results are shown in Table 1 above. Llama-2-70B-chat (zero-shot) achieves a success rate of 56.1% on this task, while 1-Step and 2-Step RaR achieve 58.5% and 77.2%, respectively, so the 2-Step approach brings a large improvement. Distilling 2-Step RaR back into the System 1 Llama-2-70B-chat model via the unsupervised technique yields 75.69%.

Thus, the distilled System 2 model delivers performance comparable to System 2 (2-Step RaR), but without executing an LLM program that requires two prompts.

System 2 Attention Distillation

Weston and Sukhbaatar (2023) proposed System 2 Attention (S2A), which helps reduce failure modes of model reasoning, such as relying on biased information in the input or attending to irrelevant context.
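The mechanism is another two-call program, sketched here with an illustrative rewrite prompt (not the published template):

```python
S2A_REWRITE_TMPL = (
    "Rewrite the following, keeping only the parts that are relevant and "
    "objective for answering the question; drop opinions and irrelevant "
    "context:\n{x}"
)

def s2a(x, llm):
    """System 2 Attention (sketch): first regenerate a cleaned version of
    the input, then answer based on the cleaned version only."""
    cleaned = llm(S2A_REWRITE_TMPL.format(x=x))
    return llm(cleaned)
```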

The researchers verified the feasibility of distilling S2A into System 1, specifically on the SycophancyEval question-answering task, whose inputs contain biased information known to hurt LLM performance.

The results are shown in Table 2 below, reporting average accuracy over 3 random seeds. As expected, the baseline (System 1) LLM has lower accuracy on the biased portion and is susceptible to biased input. S2A significantly improves performance on biased inputs. System 2 distillation exhibits similarly strong performance to the System 2 method.

Table 2: Average accuracy over 3 random seeds on SycophancyEval.

Please refer to the original paper for more experimental results.

