Microsoft's 6-page paper explodes: ternary LLM, so delicious!

This is the conclusion put forward by Microsoft and the University of Chinese Academy of Sciences in their latest study:

All LLMs will be in 1.58 bits.


Specifically, the method proposed in this study is called BitNet b1.58, and it works at the very "root" of a large language model: its parameters.

Weights traditionally stored as 16-bit floating-point numbers (such as FP16 or BF16) are replaced with ternary values, that is, {-1, 0, 1}.


Note that "1.58-bit" does not mean each parameter occupies 1.58 bytes of storage, but that each parameter carries 1.58 bits of information: a value drawn from {-1, 0, 1} encodes at most log2(3) ≈ 1.58 bits.

After this conversion, matrix computations involve only integer additions, allowing large models to maintain accuracy while significantly reducing the storage space and computing resources they require.
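To make the "additions only" point concrete, here is a minimal sketch with toy values (our illustration, not the paper's kernel):

```python
import numpy as np

# With ternary weights, every output element is just a sum and difference
# of input elements, so the matrix product needs no multiplications.
W = np.array([[1, 0, -1],
              [0, 1, 1]])                 # ternary weights in {-1, 0, 1}
x = np.array([4, 7, 2])

y_ref = W @ x                             # ordinary matmul, for reference

# Addition-only evaluation: add inputs where w == 1, subtract where w == -1
y = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

assert np.array_equal(y, y_ref)           # both give [2, 9]
```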

For example, at the 3B model size, BitNet b1.58 is 2.71 times faster than LLaMA while using barely a quarter of the GPU memory.

And at larger model sizes (for example, 70B), the speed-up and memory savings become even more significant.

This tradition-defying idea has dazzled netizens, and the paper has drawn plenty of attention on X:


While marveling that it "changes the rules of the game", netizens also revived the old joke about Google's attention paper:

1 bit is all you need.


So how is BitNet b1.58 implemented? Let's continue reading.

Changing the parameters to ternary values

This research is in fact the original team's optimization of their previously published paper: an extra value, 0, is added to the original BitNet.


Overall, BitNet b1.58 is still based on the BitNet architecture (a Transformer), with nn.Linear replaced by BitLinear.
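Conceptually, that swap is a mechanical module replacement. A hypothetical PyTorch sketch of it follows; the helper and its traversal are our own, and bitlinear_cls stands in for the paper's BitLinear layer:

```python
import torch.nn as nn

def replace_linear_with_bitlinear(module: nn.Module, bitlinear_cls) -> None:
    """Recursively swap every nn.Linear in a model for a BitLinear-style
    layer of the same shape (illustrative helper, not from the paper)."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            # assumes bitlinear_cls takes the same constructor
            # arguments as nn.Linear
            setattr(module, name, bitlinear_cls(child.in_features,
                                                child.out_features,
                                                bias=child.bias is not None))
        else:
            replace_linear_with_bitlinear(child, bitlinear_cls)
```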

As for the detailed optimizations, the first is the "adding a 0" just mentioned, namely weight quantization.

The weights of BitNet b1.58 are quantized to the ternary values {-1, 0, 1}, which is equivalent to representing each weight with 1.58 bits in binary. This quantization reduces the model's memory footprint and simplifies computation.


Second, in designing the quantization function, the researchers adopted a function called absmean in order to constrain each weight to -1, 0, or 1.


This function first scales the weight matrix by its average absolute value, and then rounds each value to the nearest integer among -1, 0, and 1.
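A minimal PyTorch sketch of that absmean procedure (the function name and the stabilizing epsilon are our choices, not the paper's released code):

```python
import torch

def absmean_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Scale the weight matrix by its mean absolute value, then
    round-and-clip each entry to the ternary set {-1, 0, 1}."""
    gamma = w.abs().mean()                         # average absolute value
    w_q = (w / (gamma + eps)).round().clamp(-1, 1)
    return w_q, gamma                              # gamma is kept to rescale outputs

w = torch.randn(4, 4)
w_q, _ = absmean_quantize(w)
print(w_q.unique())                                # only -1., 0., 1. appear
```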

The next step is activation quantization.

Activation quantization follows the BitNet implementation, except that activations are no longer scaled to the range [0, Qb] before nonlinear functions; instead, all activations are scaled to the range [−Qb, Qb], eliminating zero-point quantization.
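As a hedged sketch, that symmetric scaling might look like the following in PyTorch (the exact clipping bounds and the per-tensor granularity are our reading of the description):

```python
import torch

def quantize_activations(x: torch.Tensor, bits: int = 8, eps: float = 1e-5):
    """Scale activations symmetrically into [-Qb, Qb] using the absolute
    maximum, with no zero-point (illustrative, not the paper's code)."""
    qb = 2 ** (bits - 1)                           # Qb = 128 for 8-bit
    scale = qb / x.abs().max().clamp(min=eps)
    x_q = (x * scale).round().clamp(-qb + 1, qb - 1)
    return x_q, scale                              # divide by scale to dequantize

x = torch.randn(2, 8)
x_q, scale = quantize_activations(x)
print(int(x_q.abs().max()))                        # stays within 127
```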

It is worth mentioning that, to make BitNet b1.58 compatible with the open-source community, the team adopted LLaMA components such as RMSNorm and SwiGLU, so it can be integrated into mainstream open-source software with little effort.

Finally, for the experimental comparison, the team evaluated BitNet b1.58 against an FP16 LLaMA LLM across a range of model sizes.


The results show that, starting at the 3B size, BitNet b1.58 matches the full-precision LLaMA LLM in perplexity while delivering significantly better latency, memory usage, and throughput.

And as the model size grows, these gains become even more pronounced.

Netizens: a 120B model could run on a consumer-grade GPU

As mentioned above, the study's unusual approach has sparked heated discussion online.

DeepLearning.scala author Yang Bo said:

Compared with the original BitNet, the biggest feature of BitNet b1.58 is that it allows 0-valued parameters. I think that by slightly modifying the quantization function, we might be able to control the proportion of zero parameters. When that proportion is large, the weights can be stored in a sparse format, so that the average GPU memory per parameter is even less than 1 bit. This is equivalent to a weight-level MoE; I think it's more elegant than regular MoE.

At the same time, he also pointed out BitNet's shortcomings:

The biggest shortcoming of BitNet is that although it reduces memory overhead at inference time, the optimizer state and gradients still use floating-point numbers, so training remains very memory-intensive. I think that if BitNet can be combined with techniques that save GPU memory during training, it could support more parameters than a traditional half-precision network under the same compute and memory budget, which would be a great advantage.

The current way to reduce the GPU memory overhead of the optimizer state is offloading, and one way to reduce the memory used by gradients may be ReLoRA. However, the ReLoRA paper's experiments only used a model with one billion parameters, and there is no evidence yet that it generalizes to models with tens or hundreds of billions of parameters.
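As a rough numerical illustration of the sparsity point above (our back-of-envelope, not part of the comment): the Shannon entropy of a ternary weight distribution does fall well below 1.58 bits per weight as the zero fraction grows.

```python
import math

def ternary_entropy(p_zero: float) -> float:
    """Bits per weight when zeros occur with probability p_zero
    and the remaining mass splits evenly between +1 and -1."""
    p1 = (1 - p_zero) / 2
    return -sum(p * math.log2(p) for p in (p_zero, p1, p1) if p > 0)

print(ternary_entropy(1 / 3))   # ~1.585 bits: the uniform case
print(ternary_entropy(0.9))     # ~0.57 bits: a heavily sparse model
```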

(Image source: Zhihu, quoted with permission)

However, some netizens calculated:

If the paper holds up, then we can run a 120B model on a 24GB consumer-grade GPU.
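The arithmetic behind that estimate is easy to check (our back-of-envelope; it ignores activations and the KV cache):

```python
# 120B ternary weights at ~1.58 bits each
params = 120e9
bits_per_param = 1.58
print(f"{params * bits_per_param / 8 / 2**30:.1f} GiB")   # ~22.1 GiB, under 24 GB
```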


So what do you think of this new approach?
