'Putting' a 33-billion-parameter model into a single consumer-grade GPU: 15% faster without sacrificing performance

The performance of pre-trained large language models (LLMs) on specific tasks continues to improve, and with appropriate prompt instructions they generalize to an ever-wider range of tasks. Many attribute this to growth in training data and parameter counts. Recent trends, however, show researchers focusing on smaller models trained on more data, which are easier and cheaper to use at inference time.

For example, LLaMA with 7B parameters was trained on 1T tokens. Although its average performance is slightly below GPT-3's, it has 1/25 as many parameters. Moreover, current compression technology can compress these models further, significantly reducing memory requirements while maintaining performance. With such improvements, well-performing models can be deployed on end-user devices such as laptops.

This raises another challenge: how to compress these models to a size small enough for such devices while preserving generation quality. Research shows that although compressed models generate answers with acceptable accuracy, existing 3-4 bit quantization techniques still degrade it. Because LLM generation proceeds sequentially, conditioning on previously generated tokens, small relative errors accumulate and can lead to severely corrupted output. To ensure reliable quality, it is critical to design low-bit-width quantization methods that do not degrade prediction performance compared to 16-bit models.
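The core difficulty can be seen in a minimal round-to-nearest (RTN) sketch (illustrative only, not the paper's code): a single large-magnitude weight forces a coarse quantization step for its whole group, inflating the error on every other weight in the group.

```python
import numpy as np

def rtn_quantize(w, bits=3):
    """Round-to-nearest quantization with one scale for the whole group."""
    levels = 2 ** bits - 1
    scale = (w.max() - w.min()) / levels   # step size set by the group's range
    zero = w.min()
    q = np.round((w - zero) / scale)       # integer codes in [0, levels]
    return q * scale + zero                # dequantized weights

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=64)           # typical small weights
w_outlier = w.copy()
w_outlier[0] = 1.0                         # one outlier weight

err_plain = np.abs(rtn_quantize(w) - w).mean()
# error on the *other* weights once the outlier widens the range:
err_with_outlier = np.abs(rtn_quantize(w_outlier)[1:] - w_outlier[1:]).mean()
print(err_with_outlier > err_plain)        # the outlier degrades everything else
```

This is exactly the failure mode that motivates treating outlier weights separately.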

However, quantizing every parameter to 3-4 bits often causes moderate or even severe accuracy loss, especially for smaller models in the 1-10B parameter range that are best suited to edge deployment.

To address this accuracy problem, researchers from the University of Washington, ETH Zurich, and other institutions proposed a new compressed format and quantization technique, SpQR (Sparse-Quantized Representation), achieving near-lossless compression of LLMs across model scales for the first time while reaching compression levels similar to previous methods.

SpQR works by identifying and isolating anomalous weights that cause particularly large quantization errors, storing them at higher precision while compressing all other weights to 3-4 bits. It achieves less than 1% relative accuracy loss in perplexity for LLaMA and Falcon LLMs. This makes it possible to run a 33B-parameter LLM on a single 24GB consumer GPU without any performance degradation, while running 15% faster.
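The outlier-isolation idea can be sketched in a few lines (a simplified illustration with an assumed magnitude threshold, not the paper's sensitivity-based criterion): weights above the threshold are kept exactly, everything else is quantized.

```python
import numpy as np

def rtn_quantize(w, bits=3):
    # simple per-group round-to-nearest quantizer (illustrative)
    levels = 2 ** bits - 1
    scale = (w.max() - w.min()) / levels
    return np.round((w - w.min()) / scale) * scale + w.min()

def quantize_with_outliers(w, bits=3, threshold=0.1):
    """Keep weights above |threshold| in full precision; quantize the rest."""
    outlier_mask = np.abs(w) > threshold
    dense = w.copy()
    dense[outlier_mask] = 0.0              # remove outliers before quantizing,
    deq = rtn_quantize(dense, bits)        # so they don't widen the range
    deq[outlier_mask] = w[outlier_mask]    # restore exact outlier values
    return deq, outlier_mask

rng = np.random.default_rng(1)
w = rng.normal(0, 0.02, 256)
w[:3] = [0.9, -0.8, 0.7]                   # three injected outliers
deq, mask = quantize_with_outliers(w)
```

Because the outliers no longer stretch the quantization range, the remaining weights get a much finer step size, which is where the accuracy recovery comes from.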

The SpQR algorithm is efficient: it can both encode weights into its format and decode them efficiently at runtime. Specifically, the study provides an efficient GPU inference algorithm for SpQR that runs faster than 16-bit baseline models while achieving over 4x memory compression.
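Rough back-of-the-envelope arithmetic shows why this matters for a 24GB card (the ~4.5 bits/parameter figure below is an assumed average including outlier and metadata overhead, used only for illustration):

```python
# Memory footprint of a 33B-parameter model at different precisions.
params = 33e9
fp16_gb = params * 16 / 8 / 1e9   # 16-bit weights: ~66 GB, far beyond 24 GB
spqr_gb = params * 4.5 / 8 / 1e9  # ~4.5 bits/param (assumed): ~18.6 GB, fits
print(round(fp16_gb, 1), round(spqr_gb, 1))
```

Only after compression to roughly 4 bits per parameter does the model leave headroom for activations and the KV cache on a single consumer GPU.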


  • Paper address: https://arxiv.org/pdf/2306.03078.pdf
  • Project address: https://github.com/Vahe1994/SpQR
Method

This research proposes a new hybrid sparse quantization format, Sparse-Quantized Representation (SpQR), which can compress accurately pre-trained LLMs to 3-4 bits per parameter while remaining nearly lossless.

Specifically, the study divides the process into two steps. The first is outlier detection: the study isolates the outlier weights and demonstrates that quantizing them leads to high error, so they are kept in high precision while the other weights are stored in a low-precision (e.g. 3-bit) format. The second step implements a variant of grouped quantization with very small group sizes, and shows that the quantization scales themselves can be quantized to a 3-bit representation.
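The second step can be sketched as follows (a simplified illustration under assumed group size and bit-width, not the paper's exact algorithm): weights are quantized in small groups, and the per-group scales are then quantized a second time so their storage overhead stays small.

```python
import numpy as np

def quantize_grouped(w, bits=3, group=16):
    """Two-level grouped quantization: quantize weights per small group,
    then quantize the per-group scales themselves (secondary quantization)."""
    w = w.reshape(-1, group)
    levels = 2 ** bits - 1
    scales = (w.max(1) - w.min(1)) / levels      # one fp scale per group
    zeros = w.min(1)
    # secondary quantization: compress the scales to `bits` bits as well
    s_step = (scales.max() - scales.min()) / levels
    q_scales = np.round((scales - scales.min()) / s_step) * s_step + scales.min()
    q = np.clip(np.round((w - zeros[:, None]) / q_scales[:, None]), 0, levels)
    return (q * q_scales[:, None] + zeros[:, None]).ravel()

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, 256)
deq = quantize_grouped(w)
err = np.abs(deq - w).mean()
```

Small groups keep each scale well matched to its weights; quantizing the scales keeps the cost of having many of them from eating the compression gains.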

SpQR greatly reduces the memory footprint of LLMs without sacrificing accuracy, while generating tokens 20%-30% faster than 16-bit inference.

In addition, the study found that the positions of sensitive weights in the weight matrix are not random but form specific structures. To highlight this structure during quantization, the study computed a per-weight sensitivity and visualized it for the LLaMA-65B model. Figure 2 below depicts the output projection of the last self-attention layer of LLaMA-65B.


The study makes two changes to the quantization process: one captures small groups of sensitive weights, and the other captures individual outliers. Figure 3 below shows the overall architecture of SpQR:


The following listing shows the SpQR quantization algorithm. The code fragment on the left describes the overall process, while the snippet on the right contains the subroutines for secondary quantization and outlier detection:


Experiment

The study compares SpQR with two other quantization schemes, GPTQ and RTN (round-to-nearest), using two metrics to evaluate the quantized models. The first is perplexity, measured on the WikiText2, Penn Treebank, and C4 datasets; the second is zero-shot accuracy on five tasks: WinoGrande, PiQA, HellaSwag, ARC-easy, and ARC-challenge.
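For readers unfamiliar with the first metric: perplexity is the exponential of the average negative log-likelihood per token, so lower is better. A tiny worked example (the log-probabilities below are hypothetical, not from any model in the paper):

```python
import math

# Per-token log-probabilities assigned by a hypothetical model to a sequence.
token_logprobs = [-2.1, -0.3, -1.7, -0.9]
nll = -sum(token_logprobs) / len(token_logprobs)  # average negative log-likelihood
ppl = math.exp(nll)                               # perplexity = exp(avg NLL)
print(round(ppl, 2))                              # prints 3.49
```

A near-lossless quantization should leave this number almost unchanged relative to the 16-bit model.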

Main results. Figure 1 shows that at similar model sizes, SpQR performs significantly better than GPTQ (and the corresponding RTN), especially on smaller models. This improvement comes from SpQR achieving more compression while also reducing loss degradation.


Tables 1 and 2 show that for 4-bit quantization, SpQR's error relative to the 16-bit baseline is half that of GPTQ.


Table 3 reports perplexity results for the LLaMA-65B model on different datasets.


Finally, the study evaluates SpQR inference speed, comparing a specially designed sparse-matrix-multiplication algorithm with the one available through PyTorch (cuSPARSE); results are shown in Table 4. Although standard sparse matrix multiplication in PyTorch is no faster than 16-bit inference, the sparse matrix multiplication algorithm specially designed in this work improves speed by about 20-30%.
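The structure being multiplied can be sketched in NumPy (an illustration of the dense-plus-sparse split, not the paper's CUDA kernel): the low-bit dense part and the high-precision sparse outliers are multiplied separately, and the two products are summed.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.02, (64, 64))
W[rng.random(W.shape) < 0.01] += 1.0   # inject ~1% outlier weights

mask = np.abs(W) > 0.1
W_dense = np.where(mask, 0.0, W)       # this part would be stored in 3-4 bits
rows, cols = np.nonzero(mask)          # outliers in a COO-style sparse layout
vals = W[rows, cols]                   # kept in higher precision

x = rng.normal(size=64)
y = W_dense @ x                        # product with the (quantized) dense part
np.add.at(y, rows, vals * x[cols])     # scatter-add the sparse outlier terms
```

Since the two parts partition the matrix exactly, `y` matches `W @ x`; the speed question is purely how efficiently the sparse scatter-add is fused with the dense kernel, which is what the custom algorithm optimizes.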


Statement: This article is reproduced from 51CTO.COM.