With a theoretical foundation, we can carry out in-depth optimization.

Why does the Transformer perform so well? Where does the in-context learning capability it gives to so many large language models come from? In artificial intelligence, the Transformer has become the dominant model in deep learning, yet the theoretical basis for its excellent performance has been insufficiently studied.

Recently, new research from researchers at Google AI, ETH Zurich, and Google DeepMind has attempted to uncover the answer. They reverse-engineered trained Transformers and found the optimization algorithms implemented in their forward pass. The paper is titled "Uncovering mesa-optimization algorithms in Transformers".

Paper link: https://arxiv.org/abs/2309.05858

The authors show that minimizing a generic autoregressive loss gives rise to an auxiliary gradient-based optimization algorithm operating in the forward pass of the Transformer. This phenomenon has recently been called "mesa-optimization." Furthermore, the researchers found that the resulting mesa-optimization algorithms exhibit in-context few-shot learning capabilities, independent of model scale. The new results therefore complement earlier findings on few-shot learning emerging in large language models.

The researchers believe that the success of Transformers rests on an architectural bias toward implementing a mesa-optimization algorithm in the forward pass: (i) defining an internal learning objective, and (ii) optimizing it.

Figure 1: Illustration of the new hypothesis: optimizing the weights θ of an autoregressive Transformer f_θ gives rise to a mesa-optimization algorithm implemented in the model's forward pass. As the input sequence s_1, . . . , s_t is processed up to time step t, the Transformer (i) creates an internal training set of input-target association pairs, (ii) defines an internal objective function over this dataset, which measures the performance of an internal model with weights W, and (iii) optimizes this objective and uses the learned model to generate future predictions.
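
To make the three steps in this caption concrete, the hypothesized internal objective and its gradient-based update can be written out as below. This is a hedged reconstruction from the caption alone, assuming a linear internal model with weights W; it is not quoted from the paper.

```latex
% Internal (mesa) training set at time step t: inputs paired with their successors,
%   D_t = {(s_1, s_2), ..., (s_{t-1}, s_t)}.
% Internal objective over D_t for a linear model with weights W:
L_t(W) = \frac{1}{2} \sum_{i=1}^{t-1} \lVert s_{i+1} - W s_i \rVert^2
% One gradient step with learning rate \eta:
W \leftarrow W - \eta \nabla_W L_t(W)
  = W + \eta \sum_{i=1}^{t-1} \bigl( s_{i+1} - W s_i \bigr) s_i^{\top}
% The optimized W is then used for the next prediction: \hat{s}_{t+1} = W s_t.
```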

Contributions of this study include:

  • Generalize the theory of von Oswald et al., and show theoretically how Transformers can predict the next element of a sequence by optimizing an internally constructed regression objective with gradient-based methods.
  • Experimentally reverse-engineer Transformers trained on a simple sequence modeling task and find strong evidence that their forward pass implements a two-step algorithm: (i) early self-attention layers build an internal training dataset by grouping and copying tokens, thereby implicitly defining internal objective functions, and (ii) deeper layers optimize these objectives to generate predictions.
  • Similar to LLMs, experiments show that simple autoregressively trained models also become in-context learners, and that the on-the-fly prompt adjustments known to be crucial for improving in-context learning in LLMs also improve performance in this setting.
  • Inspired by the finding that attention layers implicitly try to optimize an internal objective function, introduce the mesa layer, a new type of attention layer that effectively solves a least-squares optimization problem rather than taking only a single gradient step toward the optimum (see the sketch after this list). Experiments show that a single mesa layer outperforms deep linear and softmax self-attention Transformers on simple sequential tasks while offering more interpretability.

  • In preliminary language modeling experiments, replacing standard self-attention layers with the mesa layer produced promising results, demonstrating that this layer has strong in-context learning capabilities.
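
As a rough illustration of what a single mesa-layer head could compute, the sketch below solves a running regularized least-squares problem over (key, value) pairs using a rank-one (Sherman-Morrison) update, instead of taking a single attention-style gradient step. The function name, the ridge regularizer `lam`, and the update order are assumptions made for illustration; the paper's actual layer may differ.

```python
import numpy as np

def mesa_layer_head(keys, values, queries, lam=1.0):
    """Hypothetical sketch of one mesa-layer head.

    Each position outputs the prediction of the ridge-regression solution
    fitted to all (key, value) pairs seen so far:
        W_t = argmin_W  sum_{i<=t} ||v_i - W k_i||^2 + lam * ||W||^2.
    The running inverse covariance is maintained with the Sherman-Morrison
    identity, so each step costs O(d^2) rather than refactorizing a matrix.
    """
    d = keys.shape[1]
    R = np.eye(d) / lam                    # running (sum_i k_i k_i^T + lam I)^{-1}
    KV = np.zeros((values.shape[1], d))    # running sum_i v_i k_i^T
    outputs = []
    for k, v, q in zip(keys, values, queries):
        Rk = R @ k
        R = R - np.outer(Rk, Rk) / (1.0 + k @ Rk)  # rank-1 Sherman-Morrison update
        KV += np.outer(v, k)
        W = KV @ R                                  # current least-squares solution
        outputs.append(W @ q)                       # head output at this position
    return np.array(outputs)

# Toy usage: 10 tokens with 4-dimensional keys, values, and queries.
rng = np.random.default_rng(0)
k, v, q = (rng.normal(size=(10, 4)) for _ in range(3))
print(mesa_layer_head(k, v, q).shape)  # (10, 4)
```

The design intent is that each position's output comes from a fully optimized internal model rather than from one gradient step on it.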

Building on recent work showing that Transformers explicitly trained to solve few-shot tasks in context can implement gradient descent (GD) algorithms, the authors show here that these results generalize to autoregressive sequence modeling, the typical approach for training LLMs.

The analysis starts with Transformers trained on simple linear dynamics, where each sequence is generated by a different ground-truth matrix W* to prevent cross-sequence memorization. In this simple setup, the authors demonstrate a Transformer that creates a mesa dataset and then uses preconditioned GD to optimize the mesa objective.
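
For concreteness, data of the kind this setup describes could be generated as below. The dimensions, the orthogonal choice of W*, and the noise scale are illustrative assumptions; only the idea of drawing a fresh W* per sequence comes from the text.

```python
import numpy as np

def sample_sequence(dim=10, length=50, noise=0.01, rng=None):
    """Linear dynamics s_{t+1} = W* s_t + noise, with a fresh W* per sequence
    so that memorizing a single transition matrix across sequences cannot work."""
    rng = rng or np.random.default_rng()
    W_star, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # random orthogonal W* (illustrative)
    s = rng.normal(size=dim)
    seq = [s]
    for _ in range(length - 1):
        s = W_star @ s + noise * rng.normal(size=dim)
        seq.append(s)
    return np.stack(seq)  # shape (length, dim); the target at step t is seq[t + 1]

batch = np.stack([sample_sequence(rng=np.random.default_rng(i)) for i in range(32)])
print(batch.shape)  # (32, 50, 10)
```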

This study trains a deep Transformer on a token structure that aggregates adjacent sequence elements. Interestingly, this simple preprocessing results in an extremely sparse weight matrix (less than 1% of the weights are non-zero), yielding an algorithm that can be readily reverse-engineered.
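
One simple way to realize such a token structure is to concatenate each element with its predecessor, so that every token carries an (input, target) pair. The exact layout here is an assumption; the paper's construction may group the elements differently.

```python
import numpy as np

def build_tokens(seq):
    """Aggregate adjacent sequence elements into one token, e.g. e_t = [s_{t-1}; s_t].
    Illustrative only; the first position is padded with zeros."""
    prev = np.vstack([np.zeros_like(seq[:1]), seq[:-1]])  # shift the sequence by one step
    return np.concatenate([prev, seq], axis=-1)           # shape (T, 2 * dim)

tokens = build_tokens(np.random.default_rng(0).normal(size=(50, 10)))
print(tokens.shape)  # (50, 20)
```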

For a single layer of linear self-attention, the weights correspond to one GD step. For deep Transformers, interpretability becomes more difficult, so this study relies on linear probing, examining whether latent activations predict the autoregressive targets or the preconditioned inputs.

Interestingly, the predictability of both probed quantities improves gradually with increasing network depth. This finding suggests that preconditioned GD is hidden within the model.
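
A linear probe of the kind described here can be sketched as follows: fit a least-squares map from a layer's activations to a probe target and score it by held-out R². The names and the train/test split are placeholders; the study's exact probing protocol may differ.

```python
import numpy as np

def probe_r2(activations, targets, train_frac=0.8):
    """Fit a linear probe (least squares) from activations to targets and
    return held-out R^2; higher means the target is more linearly decodable."""
    split = int(train_frac * len(activations))
    X_tr, X_te = activations[:split], activations[split:]
    Y_tr, Y_te = targets[:split], targets[split:]
    W, *_ = np.linalg.lstsq(X_tr, Y_tr, rcond=None)   # probe weights
    pred = X_te @ W
    ss_res = np.sum((Y_te - pred) ** 2)
    ss_tot = np.sum((Y_te - Y_te.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

# Placeholder usage: one score per layer; scores rising with depth would indicate
# that the probed quantity (targets or preconditioned inputs) is built up gradually.
# scores = [probe_r2(layer_acts[l], probe_targets) for l in range(num_layers)]
```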

The study finds that the trained layer can be perfectly fitted when all degrees of freedom of the construction are used, including not only the learned learning rate η but also a set of learned initial weights W_0. Importantly, as shown in Figure 2, the learned one-step algorithm still falls well short of a single mesa layer.

Note that the mesa layer can optimally solve the tasks studied here with simple weight configurations, and that base optimization finds these configurations easily. This result demonstrates the advantage of hard-coding inductive biases in favor of mesa optimization.
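
In equations, the contrast discussed above can be written as follows; this is a hedged reconstruction using the notation introduced earlier (learned η and W_0), not a quotation from the paper.

```latex
% One-step construction implementable by a single linear self-attention layer:
\hat{s}^{\,\text{GD}}_{t+1}
  = \Bigl( W_0 + \eta \sum_{i=1}^{t-1} \bigl( s_{i+1} - W_0 s_i \bigr) s_i^{\top} \Bigr) s_t
% Mesa layer: predict with the fully optimized least-squares solution instead,
\hat{s}^{\,\text{mesa}}_{t+1} = W_t^{\star} s_t ,
\qquad W_t^{\star} = \arg\min_{W} L_t(W)
```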

To gain theoretical insight into the multi-layer case, the authors first analyze deep linear and softmax attention-only Transformers. They format the inputs according to a four-channel structure, which corresponds to the choice W_0 = 0.

As with the single-layer model, the authors see clear structure in the weights of the trained models. As a first reverse-engineering analysis, the study exploits this structure and constructs an algorithm (RevAlg-d, where d denotes the number of layers) with 16 parameters per layer and head (instead of 3,200). The authors find that this compressed but intricate expression describes the trained model: in particular, it allows near-lossless interpolation between the actual Transformer weights and the RevAlg-d weights.

While the RevAlg-d expression explains a trained multi-layer Transformer with a small number of free parameters, it is difficult to interpret directly as a mesa-optimization algorithm. The authors therefore employ linear regression probing (Alain & Bengio, 2017; Akyürek et al., 2023) to look for signatures of the hypothesized mesa-optimization algorithm.

On the deep linear self-attention Transformer shown in Figure 3, both probed quantities can be linearly decoded, and decoding performance increases with sequence length and network depth. Base optimization therefore discovers a hybrid algorithm that descends layer by layer on the original mesa objective L_t(W) while improving the conditioning of the mesa optimization problem. This leads to a rapid decrease of the mesa objective L_t(W). Performance can also be seen to improve significantly with increasing depth.

The rapid decline of the autoregressive mesa objective L_t(W) can therefore be attributed to stepwise (cross-layer) mesa optimization on successively better preconditioned data.

Figure 3: Reverse engineering of a multi-layer Transformer trained on constructed tokens.

This shows that when the Transformer is trained on constructed tokens, it learns to predict with mesa optimization. Interestingly, when the sequence elements are given directly, the Transformer constructs the tokens by itself by grouping the elements, which the research team calls "creating the mesa dataset".

Conclusion

This study shows that the Transformer model can develop gradient-based inference algorithms when trained on a sequence prediction task under a standard autoregressive objective. Recent results obtained in multi-task, meta-learning settings can therefore also transfer to the traditional self-supervised LLM training setting.

Additionally, the study finds that the learned autoregressive inference algorithms can be repurposed to solve supervised in-context learning tasks without retraining, allowing the results to be interpreted within a single unified framework.

So what does this have to do with in-context learning? The study argues that after a Transformer is trained on autoregressive sequence tasks, it acquires an appropriate mesa optimization and can therefore perform few-shot in-context learning without any fine-tuning.

The study hypothesizes that mesa optimization also exists in LLMs, improving their in-context learning capabilities. Interestingly, the study also observes that effectively adapting prompts for LLMs can lead to substantial improvements in in-context learning.

Interested readers can read the original text of the paper to learn more about the research content.

Reference content:
https://www.reddit.com/r/MachineLearning/comments/16jc2su/r_uncovering_mesaoptimization_algorithms_in/
https://twitter.com/oswaldjoh/status/1701873029100241241
