


The first 4-bit floating-point quantization solution for large language models is here, tackling the deployment problems of LLaMA, BERT, and other Transformer models.
Large language model (LLM) compression has attracted a great deal of attention, and post-training quantization (PTQ) is one of the most commonly used approaches. However, most existing PTQ methods rely on integer quantization, and once the bit width falls below 8, the accuracy of the quantized model drops significantly. Compared with integer (INT) quantization, floating-point (FP) quantization can better represent long-tail distributions, which is why more and more hardware platforms are beginning to support FP formats. This article presents a solution for FP quantization of large models. The paper was published at EMNLP 2023.
- Paper: https://arxiv.org/abs/2310.16836
- Code: https://github.com/nbasyl/LLM-FP4
To follow this article, you first need some basic background on floating-point formats and floating-point quantization. A floating-point number can be expressed by the following formula:
Here s denotes the sign bit, m the number of mantissa bits, and e the number of exponent bits. p is an integer between 0 and 2^e − 1 that indicates which exponent interval the current number falls into; each d_i takes the value 0 or 1 and denotes the i-th mantissa bit. b is the bias, an integer used to shift the exponent interval.
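Putting these symbols together, a floating-point value takes the standard form below (written here in generic notation, reconstructed from the definitions above):

$$
X_{\mathrm{FP}} = (-1)^{s}\, 2^{\,p-b}\left(1 + \frac{d_1}{2} + \frac{d_2}{2^{2}} + \cdots + \frac{d_m}{2^{m}}\right),\qquad p \in \{0, 1, \dots, 2^{e}-1\}.
$$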
In the following sections, we explain how floating-point quantization works. First, the input values go through a step called "scale and clip", which clips the input to the maximum range that the floating-point format can represent (±Qmax). The specific calculation formula is as follows:
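In generic notation, the scale-and-clip step amounts to something like the following (a reconstruction, not the paper's exact equation; α is the full-precision scaling factor and Q_max the largest representable magnitude):

$$
X'' = \mathrm{Clip}\!\left(\frac{X}{\alpha},\; -Q_{\max},\; Q_{\max}\right)
$$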
You can see that, similar to integer quantization, FP quantization also introduces a full-precision scaling factor to scale the input into an appropriate range. When computing matrix multiplication, the scaling factor is handled separately from the low-bit matrix multiplication, so it does not introduce significant overhead. With this full-precision scaling factor, different quantized tensors can be clipped to different maximum and minimum ranges. In practice, the required quantization range is determined from the value range of the input tensor, and the corresponding bias is then derived using Equation (4). Note that the bias in Equation (4) can serve as a scaling factor for the real values; see Equations (2) and (3).
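As a concrete illustration, here is a minimal sketch of this step in PyTorch-style Python (the function and variable names are illustrative, not the paper's actual API):

```python
import torch

def scale_and_clip(x: torch.Tensor, alpha: torch.Tensor, q_max: float) -> torch.Tensor:
    """Scale the input by a full-precision factor alpha, then clip it to the
    largest magnitude the target floating-point format can represent."""
    x_scaled = x / alpha                         # full-precision scaling
    return torch.clamp(x_scaled, -q_max, q_max)  # clip to [-Qmax, Qmax]
```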
The next step of floating-point quantization is to assign each value within the determined range to its corresponding quantization level. This process is called compare and quantize:
The figure above illustrates the quantization process intuitively: each input value is compared against the interval boundaries in Equation (5) and quantized into the corresponding quantization level.
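A simplified sketch of what compare-and-quantize could look like for a format with e exponent bits and m mantissa bits, ignoring subnormal numbers (illustrative only; the kernels in the LLM-FP4 repository are more involved):

```python
import math
import torch

def fp_round(x: torch.Tensor, e_bits: int, m_bits: int, q_max: float) -> torch.Tensor:
    """Round already scaled-and-clipped values onto a simulated floating-point
    grid with e_bits exponent bits and m_bits mantissa bits whose largest
    representable magnitude is about q_max (subnormals are ignored)."""
    sign = torch.sign(x)
    mag = x.abs().clamp(min=1e-12)                      # avoid log2(0)
    top_exp = math.floor(math.log2(q_max))              # exponent of the top interval
    # Each value falls into one of 2**e_bits exponent intervals below top_exp.
    exp = torch.floor(torch.log2(mag)).clamp(min=top_exp - 2 ** e_bits + 1, max=top_exp)
    step = 2.0 ** (exp - m_bits)                        # grid spacing within that interval
    mag_q = torch.round(mag / step) * step              # round to the nearest grid point
    return sign * mag_q.clamp(max=q_max)
```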
After the activation and weight have been quantized, the scaling factors mentioned above are computed first and factored out, so that the following efficient matrix multiplication completes the acceleration:
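In other words, the two per-tensor scaling factors can be pulled out in front of the low-bit matrix multiplication (generic notation):

$$
\mathbf{Y} \;\approx\; \alpha_X\,\alpha_W \left(\hat{\mathbf{X}}\,\hat{\mathbf{W}}^{\top}\right),
$$

where α_X and α_W are the full-precision scaling factors of the activation and weight, and X̂ and Ŵ are their low-bit quantized counterparts.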
The article then points out that the accuracy of FP quantization is closely tied to the choice of exponent bits and the clipping range.
Previous work has verified that quantization error differs dramatically across FP formats (i.e., different splits of exponent and mantissa bits). Only when an appropriate FP format is chosen can FP quantization represent long-tail distributions better than INT quantization.
The article's solution is a search-based floating-point quantization algorithm: it searches exhaustively for the most suitable exponent/mantissa bit split and the corresponding clipping range.
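A rough sketch of what such a search could look like, reusing the fp_round helper above and scoring candidates by reconstruction error (the function name, candidate grid, and error metric are assumptions for illustration, not the paper's exact procedure):

```python
import torch

def search_fp_format(x: torch.Tensor, total_bits: int = 4,
                     scale_candidates=(0.7, 0.8, 0.9, 1.0)):
    """Exhaustively try exponent/mantissa splits and clipping ranges,
    keeping the configuration with the lowest reconstruction error."""
    best = None
    for e_bits in range(1, total_bits - 1):            # reserve 1 bit for the sign
        m_bits = total_bits - 1 - e_bits
        for s in scale_candidates:
            q_max = float(s * x.abs().max())
            x_hat = fp_round(x.clamp(-q_max, q_max), e_bits, m_bits, q_max)
            err = (x - x_hat).pow(2).mean()            # MSE reconstruction error
            if best is None or err < best[0]:
                best = (err, e_bits, m_bits, q_max)
    return best                                        # (error, e_bits, m_bits, q_max)
```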
In addition, across various Transformer models (BERT, LLaMA, ViT), another phenomenon makes quantization especially difficult: the magnitudes of different channels in the activations differ greatly, while values within the same channel are quite consistent. Earlier studies such as LLM.int8() and SmoothQuant reported similar observations, but this article points out that the phenomenon is not limited to LLMs; similar activation distributions also appear in other Transformer models (shown below for LLaMA, BERT, and DeiT-S):
As the figure shows, the outlier channels are much larger than the rest, so when the activation tensor is quantized, the quantization range is largely dictated by these outliers. This squeezes the effective range available to the other channels and ultimately degrades overall quantization accuracy, to the point where the quantized model collapses once the bit width drops low enough. It is also worth noting that only tensor-wise and token-wise quantization allow the scaling factor to be factored out of the efficient matrix multiplication, whereas channel-wise quantization of the activations does not, as shown in the figure below.
To solve this problem while preserving efficient matrix multiplication, the paper uses a small calibration set to pre-compute the per-channel maximum of the activations and derive the scaling factor. That scaling factor is then decomposed into a per-tensor real-valued scale multiplied by a per-channel power of two, and the power of two can be absorbed into the FP exponent bias. The whole process can be expressed by the following formula:
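Schematically, the decomposition can be written as follows (the notation here is illustrative):

$$
s_c \;=\; \tilde{\alpha}\cdot 2^{\,b_c},\qquad \tilde{\alpha}\in\mathbb{R}_{>0},\;\; b_c\in\mathbb{Z},
$$

where s_c is the scaling factor of channel c, \tilde{α} is a single real-valued per-tensor scale, and b_c is the per-channel exponent bias.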
Furthermore, once calibration is complete, the per-channel exponent bias no longer changes. It can therefore be pre-computed together with the weight quantization, folding the per-channel exponent bias into the quantized weights to improve quantization accuracy. The complete process is as follows:
After this pre-shifting, what was originally a full-precision per-channel scale on the activations becomes a single tensor-wise real-valued scaling factor, while the decomposed integer exponent offset is moved into the exponent bias position of the weights; see Equation (4) for details.
This method, called pre-shifted exponent bias, improves quantization accuracy while preserving efficient matrix multiplication. An intuitive illustration of the method is shown in the figure below:
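For intuition, here is a simplified full-precision sketch of the algebra behind pre-shifted exponent bias (the helper name and shapes are assumptions; the actual implementation stores the per-channel offset in the FP exponent bias of the quantized weights rather than dividing real-valued weights):

```python
import torch

def preshift_exponent_bias(w: torch.Tensor, calib_acts: torch.Tensor):
    """Decompose the per-channel activation scale into a per-tensor real scale
    times a per-channel power of two, then fold the power of two into the weights.
    w:          weight matrix of shape (out_features, in_features)
    calib_acts: calibration activations of shape (num_tokens, in_features)
    """
    # Per-channel maximum magnitude measured on the calibration data.
    ch_max = calib_acts.abs().amax(dim=0).clamp(min=1e-8)   # (in_features,)
    # Per-tensor real-valued scale (here: the largest channel maximum).
    alpha = ch_max.max()
    # Per-channel power-of-two factor approximating ch_max / alpha.
    exp_bias = torch.round(torch.log2(ch_max / alpha))      # integer exponents
    pow2 = 2.0 ** exp_bias
    # Fold the per-channel power of two into the weights:
    # (x / (alpha * pow2)) @ w.T == (x / alpha) @ (w / pow2).T
    w_shifted = w / pow2                                     # broadcast over in_features
    return w_shifted, alpha, exp_bias
```

After this folding, the activation only needs the single tensor-wise scale alpha at runtime, which is exactly what allows the efficient matrix multiplication described earlier.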
Finally, the article evaluates the proposed floating-point quantization (FPQ) method. On LLaMA, BERT, and ViT models, 4-bit quantization achieves results far beyond the previous state of the art. In particular, the 4-bit quantized LLaMA-13B model reaches an average score of 63.1 on zero-shot reasoning tasks, only 5.8 points below the full-precision model and 12.7 points higher than the previous SOTA method, making it one of the few known feasible 4-bit quantization schemes.