Changing LoRA's initialization: Peking University's new method PiSSA significantly improves fine-tuning results
As the parameter counts of large models keep growing, the cost of fine-tuning an entire model is becoming prohibitive.
In response, a research team at Peking University has proposed a parameter-efficient fine-tuning method called PiSSA, which outperforms the widely used LoRA on mainstream datasets.
Paper: PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
Paper link: https://arxiv.org/pdf/2404.02948.pdf
Code link: https://github.com/GraphPKU/PiSSA
As Figure 1 shows, PiSSA (Figure 1c) uses exactly the same model architecture as LoRA [1] (Figure 1b); the two differ only in how the adapter is initialized. LoRA initializes A with Gaussian noise and B with zeros, whereas PiSSA initializes A and B with the principal singular values and singular vectors of the pre-trained weight matrix.
Figure 1 shows, from left to right, full-parameter fine-tuning, LoRA, and PiSSA. Blue denotes frozen parameters; orange denotes trainable parameters and their initialization. Compared with full-parameter fine-tuning, both LoRA and PiSSA drastically reduce the number of trainable parameters, and for the same input the initial outputs of all three methods are exactly equal. The difference is that PiSSA freezes the secondary part of the model and directly fine-tunes the principal part (the top r singular values and singular vectors), whereas LoRA can be viewed as freezing the principal part of the model and fine-tuning a noise term. (A quick numerical check of the equal-initial-output claim is sketched below.)
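As a sanity check of that claim, here is a minimal PyTorch sketch (shapes are illustrative assumptions, not the paper's code) for the LoRA case, where the zero-initialized B guarantees A·B = 0; the analogous check for PiSSA (W = A·B + W^res by construction) appears in the decomposition sketch later in the article:

import torch

torch.manual_seed(0)
out_dim, in_dim, r = 64, 128, 4

W = torch.randn(out_dim, in_dim)              # stand-in for a frozen pre-trained weight
A = torch.randn(out_dim, r)                   # LoRA-style: A initialized with Gaussian noise
B = torch.zeros(r, in_dim)                    # LoRA-style: B initialized with zeros, so A @ B = 0

x = torch.randn(3, in_dim)                    # a small batch of inputs
base_out = x @ W.t()
adapted_out = x @ (W + A @ B).t()
print(torch.allclose(base_out, adapted_out))  # True: the initial output is unchanged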
Compare the fine-tuning effects of PiSSA and LoRA on different tasks
The research team used LLaMA 2-7B, Mistral-7B, and Gemma-7B as base models and fine-tuned them to strengthen their math, coding, and conversational abilities. Specifically, they trained on MetaMathQA and evaluated mathematical ability on the GSM8K and MATH datasets; trained on CodeFeedback and evaluated coding ability on the HumanEval and MBPP datasets; and trained on WizardLM-Evol-Instruct and evaluated conversational ability on MT-Bench. As the experimental results in the table below show, with the same number of trainable parameters, PiSSA's fine-tuning results clearly surpass LoRA's and even surpass full-parameter fine-tuning.
Comparing the effects of PiSSA and LoRA fine-tuning under different amounts of trainable parameters
The research team conducted ablation experiments on mathematical tasks to study how performance varies with the number of trainable parameters. Figure 2.1 shows that in the early stage of training, PiSSA's training loss drops very quickly, while LoRA's loss goes through a stage in which it does not decrease and even rises slightly. Moreover, PiSSA's training loss stays below LoRA's throughout training, indicating that it fits the training set better. Figures 2.2, 2.3, and 2.4 show that under every setting, PiSSA's loss is always lower than LoRA's and its accuracy is always higher, and PiSSA matches the effect of full-parameter fine-tuning with fewer trainable parameters. (A rough sense of how the trainable-parameter count grows with rank is sketched below.)
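For scale, a rank-r adapter on one weight matrix of shape (m, n) adds r·(m + n) trainable parameters (A is m×r and B is r×n). The short sketch below shows how this grows with rank; the 4096×4096 dimension is an illustrative assumption, not the exact layer list of any of the models above:

in_dim = out_dim = 4096                      # illustrative size of one transformer weight matrix
full = in_dim * out_dim                      # parameters of the full matrix
for r in [1, 2, 4, 8, 16, 32, 64, 128]:
    adapter = r * (in_dim + out_dim)         # A: (out_dim, r) plus B: (r, in_dim)
    print(f"rank {r:3d}: {adapter:8d} trainable parameters "
          f"({adapter / full:.3%} of the full matrix)")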
Figure 2.1) Training loss of PiSSA and LoRA during training when the rank is 1. The inset in the upper right corner of each plot zooms in on the first 100 iterations. PiSSA is shown in orange, LoRA in blue, and the final loss of full-parameter fine-tuning is shown as a green reference line. The behavior at ranks [2, 4, 8, 16, 32, 64, 128] is consistent with this; see the appendix of the paper for details.
Figure 2.2) Final training loss of PiSSA and LoRA under different numbers of trainable parameters.
Figure 2.3) Accuracy on GSM8K of models fine-tuned with PiSSA and LoRA.
Figure 2.4) Accuracy on MATH of models fine-tuned with PiSSA and LoRA.
Detailed explanation of the PiSSA method
Inspired by the observation from Intrinsic SAID [2] that "pre-trained large-model parameters have low intrinsic rank", PiSSA performs a singular value decomposition of each parameter matrix W ∈ R^(m×n) of the pre-trained model. The top r singular values and singular vectors are used to initialize the two adapter matrices A ∈ R^(m×r) and B ∈ R^(r×n); the remaining singular values and singular vectors are used to construct a residual matrix W^res ∈ R^(m×n), such that W = A B + W^res. The adapter therefore contains the core parameters of the model, while the residual matrix contains the correction parameters. By fine-tuning the small core adapter matrices A and B and freezing the much larger residual matrix, PiSSA approximates the effect of full-parameter fine-tuning with very few trainable parameters.
Although similarly inspired by Intrinsic SAID [2], the principles behind PiSSA and LoRA are completely different.
LoRA assumes that the change ΔW of a weight matrix before and after fine-tuning a large model has a very low intrinsic rank r, so it models ΔW as the product of two low-rank matrices A ∈ R^(m×r) and B ∈ R^(r×n), i.e. ΔW = A B. At initialization, LoRA fills A with Gaussian noise and B with zeros, so that A B = 0 and the model's initial capability is unchanged, and then fine-tunes A and B to update W. In contrast, PiSSA does not model ΔW at all; it assumes that W itself has a very low intrinsic rank r. It therefore applies singular value decomposition directly to W, splitting it into principal components A and B plus a residual term W^res, so that W = A B + W^res. Writing the singular value decomposition of W as W = U S V^T, A and B are initialized from the r largest singular values and the corresponding singular vectors:
A = U[:, :r] diag(S[:r])^(1/2),  B = diag(S[:r])^(1/2) V[:, :r]^T
The residual matrix W^res is initialized with the remaining singular values and singular vectors:
W^res = U[:, r:] diag(S[r:]) V[:, r:]^T
PiSSA directly fine-tunes the low-rank principal components A and B of W while freezing the minor correction term W^res. Compared with LoRA, which initializes the adapter with Gaussian noise and zeros and freezes the core model parameters, PiSSA converges faster and achieves better results. A minimal sketch of this decomposition is given below.
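The following PyTorch sketch illustrates the construction described above (a standalone illustration under assumed shapes, not the authors' implementation):

import torch

torch.manual_seed(0)
m, n, r = 64, 128, 4
W = torch.randn(m, n)                        # stand-in for a pre-trained weight matrix

U, S, Vh = torch.linalg.svd(W, full_matrices=False)

# Principal part: the top-r singular values/vectors initialize the adapter.
A = U[:, :r] @ torch.diag(S[:r].sqrt())      # shape (m, r)
B = torch.diag(S[:r].sqrt()) @ Vh[:r, :]     # shape (r, n)

# Residual part: the remaining singular values/vectors form the frozen matrix.
W_res = U[:, r:] @ torch.diag(S[r:]) @ Vh[r:, :]

# The initial model is unchanged: W = A @ B + W_res (up to floating-point error).
print(torch.allclose(W, A @ B + W_res, atol=1e-4))  # True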
PiSSA is pronounced like "pizza": if the entire large model is a whole pizza, PiSSA cuts off one slice, and it is the slice with the richest toppings (the principal singular values and singular vectors), then re-bakes it (fine-tunes it on downstream tasks) to the preferred flavor.
Since PiSSA adopts exactly the same architecture as LoRA, it can be used as an optional initialization method for LoRA and can be enabled with a small modification in the peft package (as shown in the code below). The shared architecture also lets PiSSA inherit most of LoRA's advantages, such as: applying 4-bit quantization [3] to the residual model to reduce training overhead; merging the adapter into the residual model after fine-tuning, so the model architecture at inference time is unchanged; sharing only the small PiSSA module rather than the full model parameters, since users can reproduce the singular value decomposition and assignment automatically when loading a PiSSA module; and attaching multiple PiSSA modules to one model at the same time. Improvements to LoRA can also be combined with PiSSA, for example learning the best rank of each layer instead of fixing it [4], or using PiSSA-guided updates [5] to break through the rank limit.
# A PiSSA initialization option is added after LoRA's existing initialization in the peft package:
if use_lora:
    nn.init.normal_(self.lora_A.weight, std=1 / self.r)
    nn.init.zeros_(self.lora_B.weight)
elif use_pissa:
    # Note: self.base_layer.weight has shape (out_channel, in_channel),
    # so the order of A and B is reversed compared with the figure.
    Ur, Sr, Vr = svd_lowrank(self.base_layer.weight, self.r, niter=4)
    self.lora_A.weight = torch.diag(torch.sqrt(Sr)) @ Vr.t()
    self.lora_B.weight = Ur @ torch.diag(torch.sqrt(Sr))
    self.base_layer.weight = self.base_layer.weight - self.lora_B.weight @ self.lora_A.weight
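For reference, newer releases of the peft library expose PiSSA as an initialization option of LoraConfig. The sketch below shows how it might be enabled; the model name, rank, and target modules are illustrative assumptions, and the exact option strings depend on the installed peft version:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    init_lora_weights="pissa",   # PiSSA initialization; "pissa_niter_4" uses the fast SVD
    lora_dropout=0.0,
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()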
Comparative experiment on fine-tuning effects of high, medium and low singular values
To verify the impact of initializing the adapter with singular values and singular vectors of different magnitudes, the researchers initialized the adapters of LLaMA 2-7B, Mistral-7B-v0.1, and Gemma-7B with the largest, medium, and smallest singular values respectively, and then fine-tuned them on the MetaMathQA dataset. The experimental results are shown in Figure 3: the models initialized with the principal (largest) singular values achieve the lowest training loss and higher accuracy on the GSM8K and MATH validation sets, which confirms the effectiveness of fine-tuning the principal singular values and singular vectors. (A sketch of selecting these different slices of the singular spectrum follows the figure caption.)
Figure 3) From left to right are the training loss, the accuracy on GSM8K, and the accuracy on MATH. Blue represents the largest singular value, orange represents the medium singular value, and green represents the smallest singular value.
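To make "largest, medium, and smallest singular values" concrete, here is a small sketch (assumed shapes, not the authors' code) that builds adapters from different slices of the singular spectrum:

import torch

def adapter_from_slice(W: torch.Tensor, start: int, r: int):
    # Build adapter matrices A, B from r singular triplets starting at index `start`.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    s = S[start:start + r]
    A = U[:, start:start + r] @ torch.diag(s.sqrt())
    B = torch.diag(s.sqrt()) @ Vh[start:start + r, :]
    return A, B

W = torch.randn(64, 128)                               # stand-in weight matrix
r = 4
k = min(W.shape)                                       # number of singular values
A_top, B_top = adapter_from_slice(W, 0, r)             # largest singular values (PiSSA)
A_mid, B_mid = adapter_from_slice(W, (k - r) // 2, r)  # medium singular values
A_low, B_low = adapter_from_slice(W, k - r, r)         # smallest singular values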
Fast Singular Value Decomposition
PiSSA inherits LoRA's advantages and ease of use while achieving better results. The price is that the model must be singular-value decomposed during initialization. Although the decomposition is only needed once, it can still take several minutes or even tens of minutes. The researchers therefore replaced the standard SVD with a fast singular value decomposition [6]. As the experiments in the table below show, within a few seconds it approximates the training-set fit obtained with the standard SVD. Niter denotes the number of iterations: the larger Niter is, the longer the decomposition takes and the smaller the error becomes; Niter = ∞ denotes standard SVD. The average error in the table is the mean L1 distance between the A and B obtained by fast SVD and those obtained by standard SVD. A small timing sketch of this trade-off is given below.
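The fast method in question is a randomized SVD, available in PyTorch as torch.svd_lowrank (already used in the code above). The sketch below gives a feel for the speed/accuracy trade-off; the matrix size, rank, and iteration counts are illustrative assumptions, and unlike the table it compares the reconstructed rank-r principal components rather than A and B directly, to avoid sign ambiguities in the singular vectors:

import time
import torch

torch.manual_seed(0)
W = torch.randn(4096, 4096)     # illustrative size of one transformer weight matrix
r = 16

# Standard SVD, the "Niter = infinity" reference point.
t0 = time.time()
U, S, Vh = torch.linalg.svd(W, full_matrices=False)
principal_ref = U[:, :r] @ torch.diag(S[:r]) @ Vh[:r, :]
print(f"standard SVD: {time.time() - t0:.1f}s")

# Randomized (fast) SVD with a few power iterations.
for niter in [1, 2, 4, 8]:
    t0 = time.time()
    Ur, Sr, Vr = torch.svd_lowrank(W, q=r, niter=niter)
    principal = Ur @ torch.diag(Sr) @ Vr.t()
    err = (principal - principal_ref).abs().mean()   # sign-invariant comparison
    print(f"niter={niter}: {time.time() - t0:.2f}s, mean |error| vs standard SVD: {err:.5f}")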
Summary and Outlook
This work applies singular value decomposition to the weights of the pre-trained model, uses the most important parameters of the decomposition to initialize an adapter named PiSSA, and fine-tunes this adapter to approximate the effect of fine-tuning the complete model. Experiments show that PiSSA converges faster than LoRA and achieves better final results, at the sole cost of an SVD initialization that takes a few seconds.
So, for better training results, would you be willing to spend a few extra seconds and switch LoRA's initialization to PiSSA with one click?
References
[1] LoRA: Low-Rank Adaptation of Large Language Models
[2] Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning
[3] QLoRA: Efficient Finetuning of Quantized LLMs
[4] AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
[5] Delta-LoRA: Fine-Tuning High-Rank Parameters with the Delta of Low-Rank Matrices
[6] Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions