


Meta FAIR and Harvard have developed a new research framework for understanding the numerical deviations introduced when optimizing large-scale machine learning training.
As is well known, training a large language model often takes months and uses hundreds or even thousands of GPUs. The LLaMA 2 70B model, for example, required a total of 1,720,320 GPU hours to train. Workloads of this scale and complexity present unique systemic challenges.
Recently, many institutions training SOTA generative AI models have reported instability during training, usually in the form of loss spikes. Google's PaLM, for example, experienced as many as 20 loss spikes over the course of training.
Numerical deviation is a root cause of this kind of training instability. Because large language model training is extremely expensive to run, quantifying numerical deviation has become a key problem.
In recent work, researchers from Meta and Harvard University developed a principled quantitative approach to understanding numerical deviation in training optimizations, in order to evaluate state-of-the-art optimization techniques and determine whether they might introduce unexpected instabilities when used to train large models. They found that although existing optimization methods perform well on many tasks, they can introduce numerical deviation when applied to large models. That deviation may create instability during training and degrade model performance. To address this, the researchers proposed a framework built on principled quantitative methods.
- Paper title: Is Flash Attention Stable?
- Paper link: https://arxiv.org/pdf/2405.02803
It was found that, during a single forward pass, the numerical deviation of Flash Attention at BF16 is an order of magnitude larger than that of Baseline Attention.
Specifically, the method consists of two stages:
- Develop a micro-benchmark to perturb the numerical precision within a given optimization;
- Evaluate how numerical deviations translate into changes in model weights through data-driven analysis based on Wasserstein distance.
The researchers analyzed the SOTA optimization technique Flash Attention and quantified the numerical deviation it may introduce. Flash Attention is widely used to accelerate the attention mechanism, which is often a system bottleneck in Transformer models. While Flash Attention improves speed and reduces memory accesses, it relies on algorithmic optimizations, and those optimizations may increase numerical deviation.
The researchers hypothesized that adding rescaling factors may introduce unintentional approximations, leading to numerical trade-offs, which may subsequently affect training stability.
They analyzed Flash Attention in the context of multimodal text-to-image workloads to determine the potential importance of numerical deviations between Flash Attention and its baseline. Ultimately, they introduce a framework to quantify the numerical bias of training optimization and its downstream effects.
The researchers have made the following two contributions in quantifying numerical deviations:
(1) A micro-benchmark designed to isolate the effect of numerical precision on numerical deviation.
The micro-benchmark designed by the researchers is a technique for measuring and quantifying the numerical deviations caused by traditionally black-box optimizations such as Flash Attention. By perturbing aspects that are typically unavailable in the provided kernels, they found for the first time that at low numerical precision (BF16), Flash Attention shows roughly an order of magnitude more numerical deviation than Baseline Attention.
(2) A data-driven analysis based on the Wasserstein distance metric.
This analysis allows the researchers to contextualize the observed numerical deviation and form an upper bound on its impact on downstream model properties. In their case study, they were able to bound the impact of the observed numerical deviation and found that "Flash Attention introduced model weight deviation approximately 1/2 to 1/5 times that of low-precision training."
This study highlights the importance of developing a principled approach, "not only to quantify, but also to contextualize, the impact of training optimizations on numerical deviation." By building proxies that place numerical deviation in context, the goal is to infer the likelihood of downstream effects (i.e., training instability) that are otherwise difficult to measure.
Experimental Method
The researchers first developed a micro-benchmark to isolate and study the numerical deviation caused by Flash Attention. As shown in Figure 2, they numerically reimplemented Flash Attention to analyze different numerical precisions and apply potential optimization measures at each step of the algorithm.
Figure 2: Micro-benchmark design summary.
This is necessary because the Flash Attention kernel currently only supports the FP16 and BF16 numerical formats. The kernel is also an API wrapper around CUDA code, which makes it challenging to perturb the algorithm and examine the effect of numerical deviation.
In contrast, their microbenchmark design allows precision input and modification within the algorithm. The researchers verified the microbenchmark against the original Flash Attention kernel.
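To make the idea concrete, here is a minimal sketch in PyTorch of what such a numerical re-implementation can look like. It is an illustration under simplified assumptions (single call, no masking, no tiling), not the authors' actual micro-benchmark:

```python
# A simplified numerical re-implementation of attention whose working precision can
# be chosen freely, unlike the fused Flash Attention CUDA kernel (FP16/BF16 only).
import math
import torch

def baseline_attention(q, k, v, dtype=torch.float64):
    """Scaled dot-product attention computed entirely in the requested dtype."""
    q, k, v = q.to(dtype), k.to(dtype), v.to(dtype)
    scale = 1.0 / math.sqrt(q.shape[-1])
    scores = (q @ k.transpose(-2, -1)) * scale   # (batch, heads, seq, seq)
    probs = torch.softmax(scores, dim=-1)
    return probs @ v                             # (batch, heads, seq, head_dim)

# Identical randomly initialized inputs are reused for every precision, so any
# difference in the outputs comes only from numerics.
torch.manual_seed(0)
q, k, v = (torch.randn(1, 8, 1024, 64) for _ in range(3))

out_bf16 = baseline_attention(q, k, v, dtype=torch.bfloat16)
out_fp64 = baseline_attention(q, k, v, dtype=torch.float64)
print((out_bf16.double() - out_fp64).abs().max().item())
```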
They further designed a technique to compare the Attention output matrices at each step of model execution, modifying the model code so that both Baseline Attention and Flash Attention are computed every time attention is called; this allows exact output-matrix comparison for the same input matrices.
To put this into context, they also used the max difference and Wasserstein distance metrics to quantify the difference in model weights throughout training, across identical and independently initialized training runs.
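One way to reproduce the spirit of this per-call comparison with stock tooling (a sketch, not the paper's instrumentation) is to run the same inputs through PyTorch's flash and math SDPA backends; this requires a CUDA GPU and a recent PyTorch (2.3+):

```python
# Compare a Flash-Attention-style fused kernel against an unfused reference on the
# same inputs, recording the element-wise max difference, as in the per-call check.
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

def compare_attention_call(q, k, v):
    """Max absolute difference between fused (flash) and reference (math) outputs."""
    with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
        flash_out = F.scaled_dot_product_attention(q, k, v)
    with sdpa_kernel(SDPBackend.MATH):           # unfused reference implementation
        base_out = F.scaled_dot_product_attention(q, k, v)
    return (flash_out - base_out).abs().max().item()

torch.manual_seed(0)
shape = (1, 8, 1024, 64)                         # (batch, heads, seq_len, head_dim)
q, k, v = (torch.randn(shape, dtype=torch.bfloat16, device="cuda") for _ in range(3))
print(compare_attention_call(q, k, v))
```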
For training experiments, the researchers used a generative AI workload (i.e., text-to-image model) that converts text input into images. They retrained the model using the Shutterstock dataset and ran the experiment on a cluster of NVIDIA 80GB A100 GPUs.
Quantifying numerical deviations through micro-benchmarks
The researchers first analyzed the impact of Flash Attention during the forward pass. Using the micro-benchmark, they examined the effect of different numerical precisions on the Attention output matrix while keeping the same randomly initialized query, key, and value vectors.
As shown in Figure 3, when the researchers swept numerical formats from BF16 to FP64, the numerical deviation between Flash Attention and Baseline Attention decreased as the number of mantissa bits increased. This suggests that the numerical difference stems from the approximation inherent in having fewer mantissa bits.
Figure 3: The effect of numerical format on the numerical deviation of Flash Attention.
After that, the researchers used Baseline Attention in FP64 as a "golden value" for standard comparison, and compared the Attention output in each numerical format against this value (as shown in Figure 4).
Figure 4: Comparison against the Baseline Attention "golden value" in FP64.
The results show that the numerical deviation of Flash Attention is about 10 times that of Baseline under BF16.
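A sketch of this "golden value" sweep, assuming the baseline_attention helper and the q, k, v tensors from the micro-benchmark sketch above (the formats and shapes here are illustrative, not the paper's exact configuration):

```python
# Sweep the working precision and measure deviation from an FP64 "golden value".
import torch

golden = baseline_attention(q, k, v, dtype=torch.float64)
for dtype in (torch.bfloat16, torch.float32):    # additional formats can be added
    out = baseline_attention(q, k, v, dtype=dtype)
    max_dev = (out.double() - golden).abs().max().item()
    print(f"{dtype}: max deviation vs FP64 golden value = {max_dev:.3e}")
```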
To further analyze this observed numerical deviation, the researchers scanned the sequence length of the matrix while keeping the tile size and SRAM size constant (as shown in Figure 5).
Figure 5: Effect of sequence length on Flash Attention numerical deviation.
As shown in the figure, as the sequence length increases, the numerical deviation between Flash Attention and Baseline Attention grows, whether measured by (a) the upper bound given by the maximum difference or (b) the mean and standard deviation of the difference.
In addition, the researchers used the micro-benchmark design to run experiments with different algorithmic perturbations, to better understand their impact on numerical deviation (as shown in Figure 6).
Figure 6a shows how swapping the order of block dimensions results in an increased numerical difference between Flash Attention and Baseline Attention. Other perturbations in Figure 6b, such as limiting the tile size to squares, have no effect on the numerical bias. Figure 6c shows that the larger the block/tile size, the smaller the numerical deviation.
Figure 6: Algorithm changes and their impact on observed numerical deviations.
Understanding numerical deviation through weight differences
Although Flash Attention may cause numerical deviation in the Attention output during the forward pass, the ultimate goal of the study is to determine whether this has any impact during model training, and in particular whether it can lead to training instability.
The researchers therefore sought to quantify whether Flash Attention changes the model during training, that is, whether the differences in Attention output observed above are reflected in the model weights updated during training.
The researchers used two metrics to measure the difference in model weights between a model trained with Baseline Attention and one trained with Flash Attention. The first is the maximum difference: take the absolute value of the element-wise difference between the weight matrices and then take its maximum, which gives an upper bound on the deviation, as follows:
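Written out (a reconstruction from the description above, not necessarily the paper's exact notation), with $W_{\text{Flash}}$ and $W_{\text{Baseline}}$ denoting corresponding weight matrices from the two training runs:

```latex
\mathrm{MaxDiff}\bigl(W_{\text{Flash}},\, W_{\text{Baseline}}\bigr)
  \;=\; \max_{i,j}\,\bigl|\, W_{\text{Flash}}[i,j] - W_{\text{Baseline}}[i,j] \,\bigr|
```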
Although the maximum difference provides an upper bound on the numerical deviation, it does not take into account the distribution of values in each matrix. The researchers therefore also quantify weight differences with the Wasserstein distance, a common measure of similarity between tensors. Although slightly more expensive to compute, the Wasserstein distance incorporates the shape of the tensor distributions when measuring similarity. The formula is summarized as follows:
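For reference, the standard first-order Wasserstein (earth mover's) distance between two distributions $P$ and $Q$, where $\Gamma(P, Q)$ is the set of couplings with marginals $P$ and $Q$ (the paper's exact formulation may differ):

```latex
W_1(P, Q) \;=\; \inf_{\gamma \in \Gamma(P, Q)} \; \mathbb{E}_{(x, y) \sim \gamma}\bigl[\, \lVert x - y \rVert \,\bigr]
```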
The lower the value, the higher the similarity between matrices.
Using these two metrics, the researchers then quantified how the model weights of the Flash Attention run changed relative to the Baseline Attention run throughout training:
According to both the Wasserstein distance and the max difference, adding Flash Attention does change the model weights over the course of training, and the difference only grows as training continues, indicating that a model trained with Flash Attention converges to a different model than the same model trained with Baseline Attention.
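A hypothetical sketch of how such a comparison can be run over saved checkpoints; the checkpoint paths and file layout here are invented for illustration, and scipy's one-dimensional wasserstein_distance stands in for whatever implementation the authors used:

```python
# Compare two training runs checkpoint-by-checkpoint using both weight metrics.
import torch
from scipy.stats import wasserstein_distance

def weight_metrics(state_a, state_b):
    """Max absolute difference and 1-D Wasserstein distance between two state dicts."""
    a = torch.cat([p.flatten().float() for p in state_a.values()])
    b = torch.cat([p.flatten().float() for p in state_b.values()])
    max_diff = (a - b).abs().max().item()
    w_dist = wasserstein_distance(a.numpy(), b.numpy())
    return max_diff, w_dist

# Hypothetical checkpoint files saved at matching steps of the Flash Attention run
# and the Baseline Attention run.
for step in (1000, 2000, 5000):
    flash_sd = torch.load(f"flash_run/step_{step}.pt", map_location="cpu")
    base_sd = torch.load(f"baseline_run/step_{step}.pt", map_location="cpu")
    print(step, weight_metrics(flash_sd, base_sd))
```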
However, training is a stochastic process, and models whose weights differ can still produce similar results in terms of downstream effects and accuracy. So even though the weights of models trained with Flash Attention and Baseline Attention differ, it remains to be determined how much that matters.
Fully training a model and evaluating accuracy is a costly and resource-intensive task, especially for large models that take months to train.
The researchers therefore constructed a proxy to explore:
(a) How significant are these weight changes?
(b) Can this be related to standard weight changes in other widely adopted training optimizations?
In order to achieve this goal, the researchers designed a series of experiments to compare how the weight difference changes during the training process in different scenarios.
In addition to comparing the training process using Flash Attention and Baseline Attention, they also quantified the difference in weights during the same training process where the weights were initialized to different random values at the beginning of training. This provides a bound, as random weight initialization is a common technique and often produces equivalent results.
In addition, the researchers measured the weight changes of models trained at different numerical precisions. Numerical precision (i.e., FP16 vs. FP32) can cause downstream changes, and this serves as an upper bound for the significance of the Flash Attention weight deviation.
As shown in Figure 8, the rate at which model weights deviate when using Flash Attention is comparable to or smaller than the rate at which weights deviate between different model initializations (note the slopes of the red and blue curves).
In addition, the rate and magnitude of weight change when comparing FP16 with FP32 training are larger than those between different model initializations.
These results provide a proxy and show that "while Flash Attention does exhibit numerical deviation, its effect is bounded by random model initialization and low-precision training; the model weight deviation it introduces is approximately 1/2 to 1/5 that of low-precision training."
For more research details, please refer to the original paper.