ReFT: A Revolutionary Approach to Fine-tuning LLMs
ReFT (Representation Finetuning), introduced in Stanford's May 2024 paper, offers a groundbreaking method for efficiently fine-tuning large language models (LLMs). Its potential was immediately apparent, further highlighted by Oxen.ai's July 2024 experiment fine-tuning Llama3 (8B) on a single Nvidia A10 GPU in just 14 minutes.
Unlike existing Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA, which modify model weights or inputs, ReFT builds on the Distributed Interchange Intervention (DII) method: DII projects hidden representations into a lower-dimensional linear subspace and intervenes there, and ReFT fine-tunes the model through that subspace.
This article first reviews popular PEFT algorithms (LoRA, Prompt Tuning, Prefix Tuning), then explains DII, before delving into ReFT and its experimental results.
Parameter-Efficient Fine-Tuning (PEFT) Techniques
Hugging Face provides a comprehensive overview of PEFT techniques. Let's briefly summarize key methods:
LoRA (Low-Rank Adaptation): Introduced in 2021, LoRA's simplicity and generalizability have made it a leading technique for fine-tuning LLMs and diffusion models. Instead of adjusting all layer weights, LoRA adds low-rank matrices, significantly reducing trainable parameters (often less than 0.3%), accelerating training and minimizing GPU memory usage.
Prompt Tuning: This method uses "soft prompts"—learnable task-specific embeddings—as prefixes, enabling efficient multi-task prediction without duplicating the model for each task.
Prefix Tuning (P-Tuning v2): Addressing limitations of prompt tuning at scale, Prefix Tuning adds trainable prompt embeddings to various layers, allowing task-specific learning at different levels.
LoRA's robustness and efficiency make it the most widely used PEFT method for LLMs. A detailed empirical comparison of these techniques is given by Pu et al. (see References).
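To make the LoRA idea concrete, here is a minimal PyTorch sketch of a LoRA-wrapped linear layer. The class name, rank, and scaling choices are illustrative assumptions rather than any particular library's API:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: the pretrained weight W is frozen and a trainable
    low-rank update B @ A is added, so W x becomes W x + (alpha / r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze pretrained weights
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # (r, d_in)
        self.B = nn.Parameter(torch.zeros(d_out, r))         # (d_out, r), zero-init
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# For a 4096x4096 projection, r=8 trains ~65K parameters instead of ~16.8M.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
```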
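Prompt tuning can be sketched just as compactly: the module below simply prepends trainable "virtual token" embeddings to the frozen model's input embeddings (the names and initialization are illustrative assumptions). Prefix tuning follows the same idea but injects trainable vectors at multiple layers rather than only at the input:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Minimal prompt-tuning sketch: learnable task-specific embeddings are
    prepended to the input sequence; only these vectors are trained."""
    def __init__(self, n_virtual_tokens: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_virtual_tokens, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, d_model) -> (batch, n_virtual + seq_len, d_model)
        batch_size = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

soft_prompt = SoftPrompt(n_virtual_tokens=20, d_model=4096)
```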
Distributed Interchange Intervention (DII)
DII is rooted in causal abstraction, a framework that uses interchange interventions between a high-level (causal) model and a low-level (neural network) model to assess their alignment. DII projects both models into subspaces via orthogonal projections and constructs an intervened model through rotation operations; Geiger et al. give a detailed worked example (see References).
The DII operation can be written as:

DII(b, s, R) = b + R^T(Rs - Rb)

where b and s are the base and source representations, and R is a low-rank projection matrix with orthonormal rows that defines the intervention subspace. Distributed alignment search (DAS) then optimizes this subspace so as to maximize the probability of the expected counterfactual output after the intervention.
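The following is a minimal sketch of the DII operation above, assuming base and source hidden states b and s and a projection matrix R with orthonormal rows (the toy dimensions are arbitrary):

```python
import torch

def dii(base: torch.Tensor, source: torch.Tensor, R: torch.Tensor) -> torch.Tensor:
    """DII(b, s, R) = b + R^T (R s - R b): swap the subspace component of the
    base representation for that of the source, leaving the rest of b untouched."""
    return base + R.T @ (R @ source - R @ base)

# Toy example: a 2-dimensional intervention subspace inside a 16-dimensional hidden space.
d, r = 16, 2
Q, _ = torch.linalg.qr(torch.randn(d, r))   # Q: (d, r) with orthonormal columns
R = Q.T                                      # (r, d) with orthonormal rows
b, s = torch.randn(d), torch.randn(d)
intervened = dii(b, s, R)
```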
ReFT – Representation Finetuning
ReFT intervenes on the model's hidden representations within a lower-dimensional subspace. In the paper's notation, an intervention applies a function phi to the hidden representations at a chosen layer L and set of positions P.
LoReFT (Low-rank Linear Subspace ReFT) instantiates this intervention with a learned projected source s = Wh + b:

phi_LoReFT(h) = h + R^T(Wh + b - Rh)

where h is the hidden representation, R is a low-rank matrix with orthonormal rows, and W and b compute the learned source in the subspace. The intervention therefore edits h only within the low-dimensional subspace spanned by the rows of R, and the LoReFT block is inserted into a neural network layer while the rest of the layer is left unchanged.
During LLM fine-tuning, the LLM's own parameters remain frozen, and only the intervention parameters phi = {R, W, b} are trained.
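Putting the pieces together, the snippet below is a minimal sketch of a LoReFT-style intervention module under those assumptions: only R, W, and b are trainable, and a frozen LLM would call it on the hidden states of the chosen layer and positions (for example, via a forward hook). The wiring and hyperparameters are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class LoReFTIntervention(nn.Module):
    """Sketch of phi(h) = h + R^T (W h + b - R h); only R, W, b are trained."""
    def __init__(self, d_model: int, rank: int):
        super().__init__()
        # R: low-rank projection whose rows are kept orthonormal by the parametrization.
        self.R = orthogonal(nn.Linear(d_model, rank, bias=False))
        # W, b: map h to the learned projected source s = W h + b in the subspace.
        self.proj = nn.Linear(d_model, rank)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        s = self.proj(h)                              # W h + b, shape (..., rank)
        return h + (s - self.R(h)) @ self.R.weight    # h + R^T (s - R h)

# Usage sketch: edit the hidden states of a frozen model at one layer/position set.
d_model, rank = 4096, 4
intervention = LoReFTIntervention(d_model, rank)
hidden_states = torch.randn(2, 10, d_model)          # (batch, seq_len, d_model)
edited = intervention(hidden_states)
```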
Experimental Results
The original ReFT paper presents comparative experiments against full fine-tuning (FT), LoRA, and Prefix Tuning across various benchmarks. The ReFT variants consistently match or outperform these methods while using at least 90% fewer trainable parameters.
Discussion
ReFT's appeal stems from its superior performance with Llama-family models across diverse benchmarks and its grounding in causal abstraction, which aids model interpretability. ReFT demonstrates that a linear subspace distributed across neurons can effectively control numerous tasks, offering valuable insights into LLMs.
References
- Wu et al., ReFT: Representation Finetuning for Language Models
- Hu et al., LoRA: Low-Rank Adaptation of Large Language Models
- Zhuang et al., Time-Varying LoRA
- Liu et al., P-tuning v2
- Geiger et al., Finding alignments between interpretable causal variables and distributed neural representations
- Lester et al., The power of scale for parameter-efficient prompt tuning
- Pu et al., Empirical analysis of the strengths and weaknesses of PEFT techniques for LLMs