Are You Still Using LoRA to Fine-Tune Your LLM?

LoRA (Low-Rank Adaptation, arxiv.org/abs/2106.09685) is a popular, cost-effective technique for fine-tuning large language models (LLMs). But 2024 brought a wave of new parameter-efficient fine-tuning techniques, and LoRA alternatives now abound: SVF, SVFT, MiLoRA, PiSSA, LoRA-XS, ... Most of them build on a matrix technique I am very fond of: Singular Value Decomposition (SVD). Let's dive in.

LoRA

The initial insight of LoRA was that fine-tuning all the weights of a model is overkill. Instead, LoRA freezes the model and trains only a pair of small low-rank "adapter" matrices. See the illustration below (where W is any weight matrix in a Transformer LLM).

Since far fewer gradients have to be computed and stored, this saves memory and compute cycles. For example, here is a Gemma 8B model fine-tuned with LoRA to speak like a pirate: only 22 million parameters are trainable, while 8.5 billion parameters remain frozen.
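The adapter mechanism is easy to sketch in numpy. This is a minimal illustration, not any library's actual implementation; the dimensions and rank are made up for the example:

```python
import numpy as np

d, k, r = 512, 512, 8                   # layer dims and LoRA rank (illustrative)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank adapter
B = np.zeros((d, r))                    # trainable, zero-init so W is unchanged at start

x = rng.standard_normal(k)
y = W @ x + B @ (A @ x)                 # forward pass: frozen path + adapter path

# B is zero at init, so the adapted layer matches the frozen one exactly:
assert np.allclose(y, W @ x)

# Trainable parameters: r*(d+k) instead of d*k
print(r * (d + k), "trainable vs", d * k, "frozen")
```

The zero-initialization of B is what lets training start from the unmodified pre-trained model.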

LoRA is hugely popular. It has even made it into mainstream ML frameworks such as Keras as a one-line API:

    gemma.backbone.enable_lora(rank=8)

But is LoRA the best we can do? Researchers have been hard at work improving on the formula. Indeed, there are many ways of picking smaller "adapter" matrices. Since most of them make clever use of the Singular Value Decomposition (SVD) of a matrix, let's pause for a bit of math.

SVD: Simple Mathematics

SVD is a great tool for understanding the structure of matrices. The technique splits a matrix into three: W = USVᵀ, where U and V are orthogonal (i.e., basis changes) and S is a diagonal matrix of sorted singular values. This decomposition always exists.

In the "textbook" SVD, U and V are square, while S is a rectangular matrix with the singular values on its diagonal and zeros everywhere else. In practice, you can get away with a square S and a rectangular U or V (see picture): the truncated part is only ever multiplied by zeros anyway. This "economy-sized" SVD is what common libraries such as numpy.linalg.svd return.
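Here is a quick numpy check of what the economy-sized SVD looks like on a small rectangular matrix (a minimal sketch with made-up dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))             # a rectangular "weight" matrix

# Economy SVD: U is (6, 4), s holds 4 singular values, Vt is (4, 4)
U, s, Vt = np.linalg.svd(W, full_matrices=False)

assert U.shape == (6, 4) and s.shape == (4,) and Vt.shape == (4, 4)
assert np.all(s[:-1] >= s[1:])              # singular values come sorted, descending
assert np.allclose(W, U @ np.diag(s) @ Vt)  # exact reconstruction
assert np.allclose(U.T @ U, np.eye(4))      # columns of U are orthonormal
```

Note that `full_matrices=False` is what selects the economy form; the default returns the square "textbook" U and V.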

So how can we use this to pick the weights to train more efficiently? Let's quickly go through five recent SVD-based low-rank fine-tuning techniques, with commented illustrations.

SVF

The simplest alternative to LoRA is to take the SVD of the model's weight matrices and then fine-tune the singular values directly. Oddly enough, this is the most recent of these techniques, called SVF, published in the Transformers² paper (arxiv.org/abs/2501.06252v2).
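The idea can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's code: the only trainable object is a vector z of per-singular-value scales, while U, s, and Vᵀ stay frozen:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(W, full_matrices=False)  # decompose once, freeze U, s, Vt

z = np.ones_like(s)        # the ONLY trainable vector: one scale per singular value

def adapted(z):
    # SVF-style adapted weight: rescale each singular value by z
    return U @ np.diag(s * z) @ Vt

assert np.allclose(adapted(np.ones_like(s)), W)   # z = 1 recovers the original weights
# trainable params: min(m, n) per matrix, vs LoRA's r*(m+n)
```

With min(m, n) trainable values per weight matrix, this is far leaner than even a rank-1 LoRA adapter.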

SVF is much more economical in parameters than LoRA. And as a bonus, it makes fine-tuned models composable. For more on that, see my Transformers² description here; combining two SVF fine-tuned models is just an addition.
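Why composition works is easy to see: the adapted weight is linear in z, so mixing z vectors mixes the corresponding models. A toy sketch (the z vectors here are random stand-ins for two fine-tuned skills, not real trained values):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(W, full_matrices=False)

z_pirate = 1.0 + 0.1 * rng.standard_normal(s.shape)  # stand-in for fine-tune #1
z_math   = 1.0 + 0.1 * rng.standard_normal(s.shape)  # stand-in for fine-tune #2

# Combining the two skills: just mix the z vectors; U, s, Vt stay frozen
z_mix = 0.5 * z_pirate + 0.5 * z_math
W_mix = U @ np.diag(s * z_mix) @ Vt

# Because the adapted weight is linear in z, the mix equals the
# average of the two adapted weight matrices:
W_a = U @ np.diag(s * z_pirate) @ Vt
W_b = U @ np.diag(s * z_math) @ Vt
assert np.allclose(W_mix, 0.5 * (W_a + W_b))
```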

SVFT

If you need more trainable parameters, the SVFT paper (arxiv.org/abs/2405.19597) explores a number of ways of adding them, starting with additional trainable weights on the diagonal.

It also evaluates several other alternatives, such as scattering them randomly through the "M" matrix.
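A toy sketch of the idea (my own illustration of the mechanism, not the paper's code): the adapted weight is U(S + M)Vᵀ, where M is trainable only at a chosen sparse set of positions:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(W, full_matrices=False)
n = s.shape[0]

# Choose which entries of M are trainable:
mask = np.zeros((n, n), dtype=bool)
np.fill_diagonal(mask, True)              # diagonal-only variant (like SVF)
extra = rng.integers(0, n, size=(10, 2))  # plus randomly scattered extras
mask[extra[:, 0], extra[:, 1]] = True

M = np.zeros((n, n))                      # trainable values, zero-init
W_adapted = U @ (np.diag(s) + M) @ Vt
assert np.allclose(W_adapted, W)          # zero-init leaves the model unchanged
print("trainable entries:", mask.sum())
```

During training, only the entries of M where `mask` is True would receive gradient updates; everything else stays frozen.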

More importantly, the SVFT paper confirms that having more trainable values than just the diagonal is useful. See their fine-tuning results below.

Next come several techniques that split the singular values into two groups, "big" and "small". But before we continue, let's pause for a bit more SVD math.

More SVD Mathematics

SVD is usually presented as a decomposition into three matrices, W = USVᵀ, but it can also be seen as a weighted sum of many rank-1 matrices, weighted by the singular values:

W = Σᵢ sᵢ uᵢ vᵢᵀ

If you want to prove it, express a single matrix element W_jk using the USVᵀ form and the rules of matrix multiplication on the one hand, and using the Σᵢ sᵢ uᵢ vᵢᵀ form on the other hand, simplify using the fact that S is diagonal, and notice that the two are the same.
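If you'd rather check the identity numerically than algebraically, here is a quick numpy verification (a minimal sketch on a random matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.standard_normal((5, 3))
U, s, Vt = np.linalg.svd(W, full_matrices=False)

# Rebuild W as a weighted sum of rank-1 outer products s_i * u_i v_i^T
W_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(W, W_sum)
```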

In this representation, it's easy to see that you can split the sum in two. And since the singular values always come sorted, you can split them into "big" and "small" singular values.

Going back to the three-matrix form W = USVᵀ, this is what the split looks like:

Building on this formulation, two papers have explored what happens if you tune only the big singular values or only the small ones: PiSSA and MiLoRA.
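The split itself is exact, as this numpy sketch shows (r, the number of "big" values, is chosen arbitrarily here):

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(W, full_matrices=False)

r = 2  # how many "big" singular values go in the principal part
W_big   = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]  # principal components
W_small = U[:, r:] @ np.diag(s[r:]) @ Vt[r:, :]  # residual components
assert np.allclose(W, W_big + W_small)           # the split loses nothing
```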

PiSSA

PiSSA (Principal Singular values and Singular vectors Adaptation, arxiv.org/abs/2404.02948) claims that you should tune only the large principal values. The mechanism looks like this:

From the paper: "PiSSA aims to approximate full fine-tuning by adapting the principal singular components, which are believed to capture the essence of the weight matrices. In contrast, MiLoRA is designed to adapt to new tasks while maximally retaining the base model's knowledge."
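In a rough sketch (my own illustration of the mechanism, with factorization details simplified), the top-r components become a trainable LoRA-style adapter pair, while the small remainder is frozen as a residual:

```python
import numpy as np

rng = np.random.default_rng(6)
W = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 2

# PiSSA-style init: split sqrt(s) between the two adapter factors so that
# B @ A reproduces the top-r part of W exactly at initialization.
B = U[:, :r] * np.sqrt(s[:r])                  # (8, r), trainable
A = np.sqrt(s[:r])[:, None] * Vt[:r, :]        # (r, 6), trainable
W_res = U[:, r:] @ np.diag(s[r:]) @ Vt[r:, :]  # frozen residual

assert np.allclose(W, W_res + B @ A)           # at init the model is unchanged
```

Unlike vanilla LoRA, the adapter starts out carrying the principal components of W rather than noise and zeros, so training directly reshapes the "essence" of the matrix.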

The PiSSA paper also has an interesting finding: full fine-tuning is prone to over-fitting. With a low-rank fine-tuning technique, you may actually get better results in absolute terms.

MiLoRA

MiLoRA, on the other hand, claims that you should tune only the small singular values. It uses a mechanism similar to PiSSA's:
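Sketched the same way as before (again my own simplified illustration), MiLoRA is the mirror image: the bottom-r components become the trainable adapter, and the principal part is frozen:

```python
import numpy as np

rng = np.random.default_rng(7)
W = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 2  # number of SMALL singular values to make trainable

# MiLoRA-style init: adapter carries the bottom-r components,
# the principal components stay frozen.
B = U[:, -r:] * np.sqrt(s[-r:])                          # trainable
A = np.sqrt(s[-r:])[:, None] * Vt[-r:, :]                # trainable
W_principal = U[:, :-r] @ np.diag(s[:-r]) @ Vt[:-r, :]   # frozen

assert np.allclose(W, W_principal + B @ A)   # at init the model is unchanged
```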

Surprisingly, MiLoRA seems to have the upper hand, at least when fine-tuning on math datasets, which are probably fairly well aligned with the original pre-training. Arguably, PiSSA should be better suited to bending the behavior of the LLM further away from its pre-training.

LoRA-XS

Finally, I'd like to mention LoRA-XS (arxiv.org/abs/2405.17604). It is very similar to PiSSA but with a slightly different mechanism. It also shows good results with significantly fewer parameters than LoRA.
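A rough sketch of the mechanism (my own simplified illustration): the adapter matrices U_r and Vᵀ_r are taken from the SVD of W and frozen; only a tiny r×r matrix R between them is trained:

```python
import numpy as np

rng = np.random.default_rng(8)
W = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 2

# LoRA-XS-style update: freeze U_r and Vt_r, train only the r x r matrix R.
# That is r*r trainable params, independent of the layer dimensions.
R = np.zeros((r, r))                     # trainable, zero-init
W_adapted = W + U[:, :r] @ R @ Vt[:r, :]
assert np.allclose(W_adapted, W)         # zero-init leaves the model unchanged
print("trainable params:", r * r)
```

Compare r² here with LoRA's r·(m+n): for large layers the difference is dramatic.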

The paper offers a mathematical argument that this setup is "ideal" under two conditions:

  • truncating the bottom singular values from the SVD still approximates the weight matrix well
  • the fine-tuning data distribution is close to the pre-training one

Both seem dubious to me, so I won't go into the math in detail. Some results:

The underlying assumption seems to be that singular values split neatly into "big" and "small" ones, but is that true? I did a quick check on Gemma2 9B on Colab. Bottom line: 99% of the singular values are in the 0.1 – 1.1 range. I'm not sure splitting them into "big" and "small" makes that much sense.
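You can run a similar spot check yourself. This sketch uses a random stand-in matrix rather than the actual Gemma2 weights, so the numbers are only illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 512
W = rng.standard_normal((n, n)) / np.sqrt(n)  # stand-in for an LLM weight matrix
s = np.linalg.svd(W, compute_uv=False)        # singular values only, sorted descending

lo, hi = np.percentile(s, [0.5, 99.5])
print(f"99% of singular values lie in [{lo:.2f}, {hi:.2f}]")
# For matrices like this, the spectrum is a smooth continuum with no obvious
# gap between "big" and "small" values.
```

To repeat the author's experiment for real, you would load each weight matrix of the model and plot the distribution of `s` instead.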

Conclusion

There are many more parameter-efficient fine-tuning techniques out there. Worth mentioning are:

  • DoRA (arxiv.org/abs/2402.09353), which splits the weights into magnitude and direction and then tunes those.
  • AdaLoRA (arxiv.org/abs/2303.10512), which has a complex mechanism for finding the best tuning rank for a given trainable-weight budget.

My conclusion: to go beyond the LoRA standard with 10x fewer parameters, I like the simplicity of Transformers²'s SVF. And if you need more trainable weights, SVFT is an easy extension. Both use all singular values (full rank, no singular-value pruning) and both remain cheap. Happy fine-tuning!

Note: All illustrations were created by the author or extracted from arxiv.org papers for commentary and discussion.
