3x the generation speed with reduced memory cost: an efficient decoding framework that surpasses Medusa2 is finally here

CLLMs + Jacobi decoding: a framework for efficiently decoding n-token sequences.

Traditionally, large language models (LLMs) are thought of as sequential decoders, decoding each token one by one.

A research team from Shanghai Jiao Tong University and the University of California shows that pre-trained LLMs can easily be taught to become efficient parallel decoders. They introduce a new family of parallel decoders called Consistency Large Language Models (CLLMs), which reduce inference latency by efficiently decoding an n-token sequence at each inference step.

The paper shows that imitating the cognitive process humans use (forming a complete sentence in the head before expressing it word by word) can be learned effectively simply by fine-tuning pre-trained LLMs.

Specifically, CLLMs are trained to map any randomly initialized n-token sequence, in as few steps as possible, to the same result that autoregressive (AR) decoding would produce. This is how the model is trained for parallel decoding.

Experimental results show that CLLMs obtained with the proposed method are highly effective, achieving 2.4x to 3.4x improvements in generation speed, on par with other fast-inference techniques such as Medusa2 and Eagle, while requiring no additional memory to host auxiliary model components during inference.


  • Paper name: "CLLMs: Consistency Large Language Models"

  • Paper link: https://arxiv.org/pdf/2403.00835

Figure 1: CLLM-ABEL-7B-001 achieves roughly 3x the speed of the baseline ABEL-7B-001 when using Jacobi decoding on GSM8K.

Jacobi Decoding

Large language models (LLMs) are changing the face of human life, from programming to providing legal and health advice.

However, during inference, LLMs use autoregressive decoding to generate responses token by token, as shown in Figure 1, which results in high latency for longer responses. Speeding up inference by generating multiple tokens at once typically requires architectural modifications, auxiliary components, or draft models.

Figure 2: Schematic of traditional autoregressive (AR) decoding: one token is generated at a time.
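To make the contrast concrete, here is a minimal sketch of greedy AR decoding, generating one token per forward pass. It is illustrative only; `model_argmax_next` is a hypothetical helper that runs a single forward pass of the LLM and returns the greedy next token.

```python
import torch

def ar_greedy_decode(model_argmax_next, prompt_ids, max_new_tokens):
    """Minimal sketch of greedy autoregressive decoding: one token per forward pass."""
    output = prompt_ids.clone()
    for _ in range(max_new_tokens):
        next_token = model_argmax_next(output)           # one LLM forward pass per token
        output = torch.cat([output, next_token.view(1)])
    return output
```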

Jacobi decoding derives from the Jacobi and Gauss-Seidel fixed-point iteration methods for solving nonlinear systems of equations, and has been proven to produce exactly the same output as autoregressive generation under greedy decoding.

Jacobi decoding recasts the sequential generation process as a system of n nonlinear equations in n variables, which can be solved in parallel via Jacobi iteration.

Each iteration step may predict multiple correct tokens (the so-called "correct" refers to aligning with the autoregressive decoding results under the greedy sampling strategy), thereby potentially accelerating autoregressive decoding.


Specifically, the Jacobi decoding method first randomly guesses the next n tokens of the sequence from the input prompt (referred to below as the n-token sequence, unless otherwise stated).

Then the n-token sequence, together with the prompt, is fed into the LLM for iterative updates. This process continues until the n-token sequence stabilizes: it no longer changes and has reached a fixed point.

Notably, Jacobi decoding requires no more queries to the LLM than autoregressive (AR) decoding. Eventually, the n-token sequence converges to the output that AR decoding would generate under the greedy strategy. The path from the initial random guess to the final AR result is called the "Jacobi trajectory."

An example of the Jacobi decoding iteration process and Jacobi trajectory is illustrated in Figure 2.
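A minimal sketch of this iterative update loop, in the same hypothetical setting as the AR sketch above, might look as follows. Here `model_argmax(prompt_ids, block)` is assumed to run one parallel forward pass and return the greedy prediction for every position of the n-token block, and the vocabulary size is an arbitrary placeholder.

```python
import torch

def jacobi_decode(model_argmax, prompt_ids, n, max_iters=100, vocab_size=32000):
    """Minimal sketch of Jacobi decoding for a single n-token block.

    `model_argmax(prompt_ids, block)` is assumed to perform one parallel
    forward pass of the LLM and return, for each of the n positions, the
    greedy next token given the prompt and the *previous* tokens of the
    current block.
    """
    # Step 0: random initial guess for the n-token block.
    block = torch.randint(0, vocab_size, (n,))
    trajectory = [block.clone()]                      # the Jacobi trajectory J

    for _ in range(max_iters):
        new_block = model_argmax(prompt_ids, block)   # update all n tokens in parallel
        trajectory.append(new_block.clone())
        if torch.equal(new_block, block):             # fixed point reached: y^(k) == y^(k-1)
            break
        block = new_block

    return block, trajectory   # at convergence, `block` equals the greedy AR output y*
```

Each loop iteration costs one LLM forward pass, just like one AR step, which is why Jacobi decoding needs no more queries to the model than AR decoding.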

Limitations of Jacobi decoding:

In practice, however, vanilla Jacobi decoding achieves only a marginal speedup over autoregressive decoding, with an average speedup of just 1.05x. The reason is that the LLM can rarely generate a correct token when there are errors in the preceding tokens.

As a result, most Jacobi iterations manage to correct only a single token of the n-token sequence, yielding the long trajectory shown on the left side of Figure 3.

Look-ahead decoding and speculative decoding methods attempt to alleviate the inefficiencies of Jacobi decoding and traditional autoregressive decoding, but incur additional memory costs during inference.

CLLMs do not require these additional memory costs.

Consistency Large Language Models (CLLMs)

Preliminaries: Jacobi decoding

Given a prompt x and a pre-trained LLM p(·|x), researchers typically use standard autoregressive (AR) decoding to obtain the model's response under the greedy strategy, that is:

$$y_i = \arg\max_y \; p(y \mid y_{1:i-1}, x), \qquad i = 1, \dots, n.$$

Jacobi decoding reframes the LLM inference process as solving a system of nonlinear equations, turning decoding into a form that can be computed in parallel. Consider

$$f(y_i, y_{1:i-1}, x) := y_i - \arg\max_y \; p(y \mid y_{1:i-1}, x).$$

The AR decoding above can then be rewritten as a system of nonlinear equations:

$$f(y_i, y_{1:i-1}, x) = 0, \qquad i = 1, \dots, n.$$

Note that this system can be solved in parallel by Jacobi fixed-point iteration: starting from a randomly initialized n-token sequence $y^{(0)}$, all n tokens are updated simultaneously at each step,

$$y_i^{(j+1)} = \arg\max_y \; p\big(y \mid y_{1:i-1}^{(j)}, x\big), \qquad i = 1, \dots, n.$$

The process exits at some step k such that

$$y^{(k)} = y^{(k-1)}.$$

One then defines $y^* := y^{(k)}$ as the fixed point and $\mathcal{J} := \{y^{(0)}, \dots, y^{(k)}\}$ as the Jacobi trajectory.

To address this, the research team proposes adapting pre-trained LLMs so that they can consistently map any point y on the Jacobi trajectory $\mathcal{J}$ to the fixed point y*.

Surprisingly, they found that such a goal is similar to that of the consistency model—a major acceleration method for diffusion models.

In the method proposed by the team, the model is trained using Jacobi trajectories collected from the target model and uses a loss function that encourages single-step convergence during Jacobi iterations.

For each target model p to be adapted into a CLLM, training consists of two parts:

(1) Jacobi trajectory preparation:

For each prompt, the authors sequentially perform Jacobi decoding on every n-token truncation until the entire response sequence l has been generated; the full response is then the concatenation of all consecutive fixed points.

Each sequence generated along the trajectory is counted as a data entry.

Note that for long responses l containing N (N ≫ n) tokens, this truncation avoids slow model evaluation on long inputs.
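A rough sketch of how such trajectory data might be assembled, reusing the `jacobi_decode` helper sketched earlier; the truncation size n, the EOS id, the length cap, and the dictionary-style data entries are illustrative assumptions, not the authors' exact pipeline.

```python
import torch

def collect_jacobi_trajectories(model_argmax, prompts, n=16, eos_id=2, max_len=1024):
    """Sketch of Jacobi trajectory preparation for CLLM training.

    The response is generated one n-token truncation at a time with
    `jacobi_decode` (sketched above); every intermediate state of every
    block is stored together with that block's fixed point.
    """
    dataset = []
    for prompt_ids in prompts:
        context = prompt_ids.clone()
        # Keep decoding blocks until the response ends (or a length cap is hit).
        while eos_id not in context[len(prompt_ids):] and context.numel() < max_len:
            fixed_point, trajectory = jacobi_decode(model_argmax, context, n)
            for state in trajectory:
                dataset.append({
                    "prompt": context.clone(),   # prompt plus already-converged blocks
                    "state": state,              # y: one point on the trajectory
                    "target": fixed_point,       # y*: the block's fixed point
                })
            context = torch.cat([context, fixed_point])   # append block, continue
    return dataset
```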

(2) Training using consistency and AR loss:

The authors jointly optimize the two losses to tune CLLMs: the consistency loss drives the model to predict multiple tokens at once, while the AR loss keeps the CLLM from deviating from the target LLM, preserving generation quality.

Figure 4: Illustration of consistency training toward one-step convergence: the target LLM is adjusted so that, given any state on the Jacobi trajectory as input, it always predicts the fixed point.

Consistency and AR loss:

(1) Consistency loss

Suppose p denotes the target LLM.

Let $q_\theta(\cdot \mid \cdot)$ denote the CLLM with parameters θ, initialized from p.

For a prompt x and the corresponding Jacobi trajectory $\mathcal{J}$, let y and y* denote a random state on the trajectory and its fixed point, respectively.

The CLLM can be pushed to output y* when taking y as input by minimizing the following loss, called the global consistency (GC) loss:

$$\mathcal{L}_{\mathrm{GC}} = \mathbb{E}_{(x,\mathcal{J}) \sim \mathcal{D},\, y \sim \mathcal{J}} \left[ \sum_{i=1}^{n} D\big(q_{\theta^{-}}(\cdot \mid y^{*}_{1:i-1}, x) \,\big\|\, q_{\theta}(\cdot \mid y_{1:i-1}, x)\big) \right]$$

In this formula, $(x,\mathcal{J}) \sim \mathcal{D},\ y \sim \mathcal{J}$ denotes uniform sampling from the dataset of collected trajectories, and $\theta^{-} = \mathrm{stopgrad}(\theta)$.

D(·‖·) denotes a distance between two distributions; its choice is discussed in the GKD method, and this work mainly uses the forward KL.

Alternatively, a local consistency (LC) loss can be used, following the formulation in consistency models, where adjacent states $y^{(j)}$ and $y^{(j+1)}$ on the Jacobi trajectory $\mathcal{J}$ are driven to produce the same outputs:

$$\mathcal{L}_{\mathrm{LC}} = \mathbb{E}_{(x,\mathcal{J}) \sim \mathcal{D},\, (y^{(j)},\, y^{(j+1)}) \sim \mathcal{J}} \left[ \sum_{i=1}^{n} D\big(q_{\theta^{-}}(\cdot \mid y^{(j+1)}_{1:i-1}, x) \,\big\|\, q_{\theta}(\cdot \mid y^{(j)}_{1:i-1}, x)\big) \right]$$
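As a concrete, hedged illustration of the global consistency term with forward KL: the sketch below conditions the same model once on the fixed point y* (detached, acting as the teacher) and once on a trajectory state y, and penalizes the KL divergence between the two predicted distributions over the n block positions. The causal-LM interface `model(input_ids) -> per-position logits` is an assumption made for illustration.

```python
import torch
import torch.nn.functional as F

def global_consistency_loss(model, prompt_ids, state, fixed_point):
    """Sketch of the global consistency (GC) loss with forward KL.

    `model(input_ids)` is assumed to be a causal LM returning one logits
    vector per input position. The distribution conditioned on the fixed
    point y* plays the role of the detached teacher (theta^- in the paper).
    """
    n = fixed_point.numel()
    L = prompt_ids.numel()

    # Teacher: condition on the fixed point y*, gradients stopped.
    with torch.no_grad():
        teacher_logits = model(torch.cat([prompt_ids, fixed_point]))
    # Student: condition on an arbitrary state y from the Jacobi trajectory.
    student_logits = model(torch.cat([prompt_ids, state]))

    # Logits at positions L-1 .. L+n-2 predict the n tokens of the block.
    teacher_logp = F.log_softmax(teacher_logits[L - 1:L - 1 + n], dim=-1)
    student_logp = F.log_softmax(student_logits[L - 1:L - 1 + n], dim=-1)

    # Forward KL D(teacher || student), summed over the n block positions.
    return F.kl_div(student_logp, teacher_logp, log_target=True, reduction="sum")
```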

(2) AR loss:

To avoid deviating from the distribution of the target LLM, the authors incorporate the traditional AR loss based on the response l generated by the target LLM p:

$$\mathcal{L}_{\mathrm{AR}} = \mathbb{E}_{(x, l) \sim \mathcal{D}} \left[ -\sum_{i} \log q_{\theta}(l_i \mid l_{1:i-1}, x) \right]$$

By combining the two losses with a weight ω, the total loss for training a CLLM is:

$$\mathcal{L}(\theta) = \mathcal{L}_{\mathrm{consistency}} + \omega\, \mathcal{L}_{\mathrm{AR}}$$
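Continuing the sketch above (and reusing its imports and `global_consistency_loss`), the joint objective might be assembled as follows; the value of the weight `omega` shown here is a placeholder, not the paper's setting.

```python
def cllm_training_loss(model, prompt_ids, state, fixed_point, ar_response, omega=1.0):
    """Sketch of the joint CLLM objective: consistency loss plus omega * AR loss.

    `ar_response` is the response l generated by the target LLM p.
    """
    # Consistency term (global variant sketched above).
    loss_consistency = global_consistency_loss(model, prompt_ids, state, fixed_point)

    # Standard AR next-token cross-entropy on the target LLM's response l.
    full = torch.cat([prompt_ids, ar_response])
    logits = model(full)
    L = prompt_ids.numel()
    ar_logits = logits[L - 1:-1]              # positions predicting each response token
    loss_ar = F.cross_entropy(ar_logits, ar_response, reduction="sum")

    return loss_consistency + omega * loss_ar
```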

Experiments

Results:

Overall, the experiments cover three domain-specific tasks:

(1) Spider (Text to SQL)

(2) Human-Eval (Python code completion) and GSM8k (Math)

(3) plus the broader open-domain conversation challenge, MT-bench.

The reported experiments use a fine-tuned coder LLM (Deepseek-Coder-7B-Instruct), LLaMA-2-7B, or ABEL-7B-001 as the target model, depending on the task.

Training and evaluation are performed on NVIDIA A100 40GB servers.

Figure 5: Speedup of CLLMs on different downstream tasks. The results show that CLLMs are significantly faster than the pre-trained models and achieve speedups comparable to Medusa, but with no additional cost at inference time.

Figure 6: Comparison of CLLMs with other baselines on domain-specific tasks (Spider, CSN-Python, GSM8k) and on MT-bench. CLLMs achieve similar or even better speedups than Medusa2 while introducing no additional inference cost (measured by FLOPS and memory consumption).

Domain-specific tasks:

Figure 5 shows that, compared with other baselines (including the original target model, Medusa2, and speculative decoding), CLLMs achieve the most significant speedups.

Open-domain conversation challenge (MT-bench):

When a CLLM trained from LLaMA2-7B on the ShareGPT dataset is combined with lookahead decoding, it achieves roughly the same speedup as Medusa2 and obtains comparable scores on MT-bench.

However, the CLLM is more adaptable and memory-efficient, as it requires neither modifications to the target model's original architecture nor auxiliary components.

Training Cost:

The fine-tuning cost of CLLMs is modest.

For example, for LLaMA-7B, training on only about 1M tokens is enough to achieve a 3.4x speedup on the Spider dataset. When the dataset is large (such as CodeSearchNet-Python), only 10% of it needs to be used to generate the Jacobi trajectories for training CLLMs, yielding an approximately 2.5x speedup.

The total number of tokens can be estimated in the following way:

N = (average number of trajectories per prompt) × (average trajectory length) × (number of prompts).
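As a quick worked example of this estimate (with made-up numbers purely to illustrate the formula, not figures from the paper):

```python
# Illustrative arithmetic only -- these numbers are invented to show the formula,
# not taken from the paper.
avg_trajectories_per_prompt = 10
avg_trajectory_length = 50          # tokens per trajectory (illustrative)
num_prompts = 2_000

N = avg_trajectories_per_prompt * avg_trajectory_length * num_prompts
print(f"{N:,} tokens")              # 1,000,000 -- on the order of the ~1M tokens cited for Spider
```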


Figure 7: Jacobi trajectory comparison between target LLM and CLLM on Spider. Each point on the Jacobi trajectory is a color-coded sequence: correct matches to the AR results are marked in blue, inaccurate ones are marked in red. CLLM exhibits enhanced efficiency, converging to the fixed point 2 times faster than the target LLM. This enhanced efficiency of CLLM can be attributed to the consistency loss, which facilitates the learning of the structure of the n-token sequence for each given prefix.

The left side of Figure 7 shows that the target LLM usually generates only one correct token per iteration. In contrast, in CLLMs the authors observe a fast-forwarding phenomenon, in which multiple consecutive tokens are correctly predicted in a single Jacobi iteration.

In addition, in the target LLM, tokens that happen to be generated correctly in advance (such as "country" and "H" at indexes 6 and 7 on the left side of Figure 7) are often replaced incorrectly in subsequent iterations.

CLLMs, on the other hand, show the ability to predict correct tokens in advance and keep them unchanged even when preceding tokens are still incorrect.

The author calls such a token a "fixed token". These two phenomena together contribute to the rapid convergence of CLLMs in Jacobi decoding, resulting in considerable generation speed improvements.

The research team also observed that through training, CLLMs acquired a key language concept - collocation: "a series of words or terms that co-occur more frequently than expected by random chance."

Language is not only made up of isolated words, but also relies heavily on specific word pairs. Examples of collocations are abundant in both natural and programming languages.

They include:

  • Verb + preposition combination (such as "talk to", "remind ... of ...")

  • Verb + noun structures (e.g. "make a decision", "catch a cold")

  • Many domain-specific syntactic structures (e.g., "SELECT ... FROM ..." in SQL, "if ... else" in programming).

The consistency-generation objective enables CLLMs to infer such structures from any point on the Jacobi trajectory, helping them master a large number of collocations and thereby predict multiple words simultaneously, minimizing the number of iteration steps.

Reference link:

https://hao-ai-lab.github.io/blogs/cllm/
