
Let large models no longer be "behemoths": the latest survey on parameter-efficient fine-tuning of large models

王林 (forwarded)
2024-04-28 16:04


The AIxiv column is where this site publishes academic and technical content. Over the past few years, it has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to submit it or contact us for coverage. Submission emails: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com.

Recently, large AI models such as large language models and text-to-image models have developed rapidly. Against this backdrop, adapting large models quickly to fast-changing requirements and to a wide variety of downstream tasks has become a major challenge. Constrained by computing resources, traditional full-parameter fine-tuning often falls short, so more efficient fine-tuning strategies are needed. These challenges have driven the recent rapid development of parameter-efficient fine-tuning (PEFT) techniques.

To comprehensively summarize the development of PEFT and keep up with the latest research, researchers from Northeastern University, the University of California, Riverside, Arizona State University, and New York University recently investigated, organized, and summarized the applications and prospects of parameter-efficient fine-tuning (PEFT) for large models in a comprehensive, up-to-date survey.


Paper link: https://arxiv.org/pdf/2403.14608.pdf

PEFT offers an efficient way to adapt pre-trained models to downstream tasks. By freezing most of the pre-trained parameters and fine-tuning only a small fraction, large models can be deployed lightly and adapted quickly to a variety of downstream tasks, so that they are no longer unwieldy behemoths.

The full text runs to 24 pages and covers nearly 250 recent papers. Soon after its release it was cited by researchers at Stanford University, Peking University, and other institutions, and it has attracted considerable attention across various platforms.


Specifically, the review examines PEFT at four levels: PEFT algorithm taxonomy, efficient PEFT design, cross-domain applications of PEFT, and PEFT system design and deployment, giving a comprehensive and careful account of the development history of PEFT and its latest progress. Whether you are a practitioner in a related industry or a beginner in large-model fine-tuning, this review can serve as a comprehensive learning guide.


1. Background on PEFT

Taking the recently popular LLaMA model as a representative example, the paper first analyzes the architecture and computation flow of large language models (LLMs) and other Transformer-based models, and defines the symbolic notation needed for analyzing the various PEFT techniques that follow.


In addition, the author outlines the taxonomy of PEFT algorithms, dividing them by the operation they perform into additive fine-tuning, selective fine-tuning, re-parameterized fine-tuning, and hybrid fine-tuning. Figure 3 shows this taxonomy together with the specific algorithms in each category; the precise definition of each category is explained in detail later.


In the background section, the author also introduces the common downstream benchmarks and datasets used to evaluate PEFT methods, helping readers become familiar with common task settings.

2. PEFT method classification

The author first defines additive fine-tuning, selective fine-tuning, re-parameterized fine-tuning, and hybrid fine-tuning:

  • Additive fine-tuning adds learnable modules or parameters at specific positions in the pre-trained model, minimizing the number of trainable parameters when adapting to downstream tasks.
  • Selective fine-tuning updates only a subset of the model's parameters during fine-tuning while keeping the rest fixed. Unlike additive fine-tuning, it does not change the architecture of the pre-trained model.
  • Re-parameterized fine-tuning trains a (low-rank) re-parameterization of the pre-trained model's weights; at inference time, the learned parameters are equivalently converted back into the original weight structure, so no extra inference latency is introduced.

The distinction between the three is shown in Figure 4:


Hybrid fine-tuning combines the advantages of various PEFT methods and analyzes the similarities of different methods to build a unified PEFT architecture or find optimal PEFT hyperparameters.

Next, the author further subdivides each PEFT category:

A. Additive fine-tuning:

1) Adapter

Adapter methods achieve parameter-efficient fine-tuning by adding small Adapter layers inside Transformer blocks. Each Adapter layer contains a down-projection matrix, an activation function, and an up-projection matrix: the down-projection maps the input features to a bottleneck dimension r, and the up-projection maps the bottleneck features back to the original dimension d.
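To make this concrete, here is a minimal PyTorch sketch of a serial bottleneck adapter with a residual connection. The dimension names d and r follow the text; the GELU activation, the zero-initialized up-projection, and the residual shortcut are common design choices assumed for illustration, not details prescribed by the survey.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project d -> r, nonlinearity, up-project r -> d, residual add."""
    def __init__(self, d_model: int, bottleneck: int):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)   # d -> r
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, d_model)     # r -> d
        nn.init.zeros_(self.up.weight)               # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))   # serial adapter with residual connection

# The backbone stays frozen; only the adapter's parameters are passed to the optimizer.
d, r = 768, 16
adapter = Adapter(d, r)
hidden = torch.randn(2, 10, d)                       # (batch, tokens, hidden)
print(adapter(hidden).shape)                         # torch.Size([2, 10, 768])
```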


Figure 5 shows three typical strategies for inserting Adapter layers into the model. The Serial Adapter is inserted sequentially after a Transformer sub-module, while the Parallel Adapter is attached alongside it as a parallel branch. CoDA is a sparse Adapter method: for important tokens, CoDA runs both the pre-trained Transformer module and the Adapter branch; for unimportant tokens, it runs only the Adapter branch, saving computation.

2) Soft Prompt

Soft Prompt methods achieve parameter-efficient fine-tuning by prepending learnable vectors to the input sequence. Representative methods include Prefix-tuning and Prompt Tuning. Prefix-tuning fine-tunes the model's representations by adding learnable vectors in front of the key and value matrices of each Transformer layer, while Prompt Tuning inserts learnable vectors only at the first word-embedding layer, further reducing the number of trainable parameters.
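As a concrete illustration of the soft-prompt idea, below is a minimal Prompt-Tuning-style sketch in PyTorch. The prompt length, initialization scale, and the convention of concatenating in front of the embedded sequence are illustrative assumptions, not specifics from the survey.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Prepends n_prompt learnable vectors to the embedded input; the backbone stays frozen."""
    def __init__(self, n_prompt: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)

    def forward(self, input_embeds):                  # input_embeds: (batch, seq, d)
        batch = input_embeds.size(0)
        soft = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([soft, input_embeds], dim=1) # (batch, n_prompt + seq, d)

# Only `prompt` is trainable; the frozen model simply consumes a longer embedding sequence.
embeds = torch.randn(4, 32, 768)
print(SoftPrompt(20, 768)(embeds).shape)              # torch.Size([4, 52, 768])
```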

3) Others

Beyond the two categories above, some other PEFT methods also introduce new parameters during training.


Two typical methods are shown in Figure 6. (IA)³ introduces three learned scaling vectors that rescale the keys, the values, and the activations of the feed-forward network. SSF adjusts the model's activations through linear transformations: after each operation, it adds an SSF-ADA layer that scales and shifts the activation values.
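The sketch below illustrates (IA)³-style rescaling with learned vectors initialized to ones, so training starts from the unmodified model. How these vectors are threaded into a real attention or feed-forward implementation is model-specific, so the helper methods and dimension names here are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class IA3Scaling(nn.Module):
    """(IA)^3-style rescaling: elementwise scale vectors for keys, values, and FFN activations."""
    def __init__(self, d_head: int, d_ff: int):
        super().__init__()
        self.l_k = nn.Parameter(torch.ones(d_head))   # scales attention keys
        self.l_v = nn.Parameter(torch.ones(d_head))   # scales attention values
        self.l_ff = nn.Parameter(torch.ones(d_ff))    # scales feed-forward activations

    def scale_kv(self, k, v):
        # k, v: (..., d_head); broadcasting applies the scale to every token
        return k * self.l_k, v * self.l_v

    def scale_ffn(self, h_ff):
        # h_ff: (..., d_ff) intermediate activations of the feed-forward network
        return h_ff * self.l_ff
```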

B. Selective fine-tuning:

1) Unstructured mask

These methods determine which parameters may be fine-tuned by applying a learnable binary mask over the model parameters. Many works, such as Diff pruning, FishMask, and LT-SFT, focus on how to compute the positions of the mask.
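As a rough illustration of the unstructured-mask idea, the sketch below scores parameters by their squared gradient on a single batch (a crude Fisher-information proxy, loosely in the spirit of FishMask), keeps the top fraction, and then zeroes the gradients of everything else after each backward pass. The function names, the keep_ratio, and the one-batch scoring are illustrative assumptions, not the exact procedures of the cited methods.

```python
import torch

def build_unstructured_mask(model, loss_fn, batch, keep_ratio=0.005):
    """Select the top keep_ratio fraction of parameters by squared gradient on one batch."""
    model.zero_grad()
    loss_fn(model, batch).backward()
    grads = [p.grad.detach() if p.grad is not None else torch.zeros_like(p)
             for p in model.parameters()]
    scores = torch.cat([g.pow(2).flatten() for g in grads])
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = scores.topk(k).values.min()
    return [(g.pow(2) >= threshold).float() for g in grads]

def mask_gradients(model, masks):
    """Call after each backward pass so only the selected entries are ever updated."""
    for p, m in zip(model.parameters(), masks):
        if p.grad is not None:
            p.grad.mul_(m)
```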

2) Structured mask

Unstructured masks place no restriction on the mask's shape, but this hurts computational efficiency. Therefore, some works, such as FAR, S-BitFit, and Xattn Tuning, impose structured constraints on the mask's shape. The difference between the two is shown in the figure below:


C. Re-parameterized fine-tuning:


1) Low-rank decomposition

These methods fine-tune by finding low-dimensional re-parameterized forms of the pre-trained weight matrices to represent the full parameter space. The most typical method is LoRA, which adds a pair of down- and up-projection matrices to construct a low-rank representation of the weight update for training. After training, the extra parameters can be seamlessly merged into the pre-trained weights, so no additional inference overhead is introduced. DoRA decomposes each weight matrix into a magnitude component and a direction component, and uses LoRA to fine-tune the direction.
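Below is a minimal PyTorch sketch of a LoRA-wrapped linear layer, including the post-training merge back into the frozen weight. The rank, the alpha/r scaling, and the initialization follow common practice and are assumptions for illustration rather than details fixed by the survey.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable rank-r update: y = W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                              # keep pre-trained weights fixed
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)   # down-projection
        self.B = nn.Parameter(torch.zeros(base.out_features, r))         # up-projection, zero-init
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    @torch.no_grad()
    def merge(self):
        """Fold the low-rank update into the frozen weight so inference pays no extra cost."""
        self.base.weight += self.scale * (self.B @ self.A)

# Wrap an existing projection; only A and B are trained.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
print(layer(torch.randn(2, 768)).shape)                          # torch.Size([2, 768])
```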

2) LoRA derivatives

The author divides LoRA derivatives into methods that dynamically select LoRA's rank and methods that improve LoRA in other respects.
For dynamic rank selection, the typical method is DyLoRA, which trains over a range of ranks simultaneously, reducing the resources spent searching for the optimal rank.

For LoRA improvements, the author lists the shortcomings of vanilla LoRA in various respects and the corresponding remedies.

D. Hybrid fine-tuning:

This part studies how to integrate different PEFT techniques into a unified model and find an optimal design pattern. It also introduces approaches that use neural architecture search (NAS) to find optimal PEFT training hyperparameters.

3. Efficient PEFT design


In this section, the author discusses research on improving the efficiency of PEFT, focusing on training and inference latency and peak memory overhead. Efficiency improvements are described from three perspectives:

PEFT pruning strategies: combining neural network pruning with PEFT to further improve efficiency. Representative works include AdapterDrop and SparseAdapter.

PEFT quantization strategies: reducing model precision to shrink model size and improve computational efficiency. When combined with PEFT, the main difficulty is handling both the pre-trained weights and the quantization of the newly added PEFT modules. Representative works include QLoRA and LoftQ.
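To illustrate the interplay between quantized pre-trained weights and a full-precision PEFT module, here is a deliberately simplified sketch that stores the frozen weight in int8 with a single per-tensor scale. Real systems such as QLoRA use a far more sophisticated scheme (4-bit NF4, double quantization, paged optimizers), so treat this only as a conceptual toy under stated assumptions.

```python
import torch
import torch.nn as nn

class QuantizedBaseWithLoRA(nn.Module):
    """Frozen int8 base weight (per-tensor scale) plus trainable full-precision LoRA factors."""
    def __init__(self, weight: torch.Tensor, r: int = 8):
        super().__init__()
        out_f, in_f = weight.shape
        self.w_scale = weight.abs().max().clamp(min=1e-8) / 127.0
        q = torch.round(weight / self.w_scale).clamp(-127, 127).to(torch.int8)
        self.register_buffer("w_q", q)                           # quantized, frozen
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)       # trainable, full precision
        self.B = nn.Parameter(torch.zeros(out_f, r))

    def forward(self, x):
        w = self.w_q.float() * self.w_scale                      # dequantize on the fly
        return x @ w.T + x @ self.A.T @ self.B.T                 # frozen path + LoRA path
```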

Memory-efficient PEFT design: although PEFT updates only a small number of parameters during training, gradient computation and backpropagation still leave a large memory footprint. To address this, some methods reduce memory overhead by bypassing gradient computation through the pre-trained weights, such as Side-Tuning and LST; others avoid backpropagation inside the LLM altogether, such as HyperTuning and MeZO.

4. Cross-domain applications of PEFT

In this chapter, the author explores the applications of PEFT in different domains and discusses how to design better PEFT methods to improve the performance of specific models or tasks. The section focuses on several families of large pre-trained models, including LLMs, Vision Transformers (ViT), vision-language models, and diffusion models, and describes in detail the role of PEFT in adapting these models to downstream tasks.

For LLMs, the author introduces how PEFT can be used to fine-tune an LLM to accept visual instruction inputs, with LLaMA-Adapter as a representative work. The author also explores PEFT for continual learning of LLMs and mentions using PEFT to extend an LLM's context window.

For ViT, the author describes how to use PEFT technology to adapt it to downstream image recognition tasks, and how to use PEFT to give ViT video recognition capabilities.

For vision-language models, the author surveys many works that apply PEFT to fine-tune vision-language models for open-set image classification.

For diffusion models, the author identifies two common scenarios, adding conditioning inputs other than text and achieving personalized generation, and describes how PEFT is applied in each.

5. System design challenges of PEFT

In this chapter, the author first describes the challenges faced by cloud-based PEFT systems, mainly the following:

Centralized PEFT query serving: the cloud server stores a single copy of the LLM together with multiple PEFT modules. Depending on the task requirement of each PEFT query, the server selects the corresponding PEFT module and combines it with the LLM.

Distributed PEFT query serving: the LLM is stored on the cloud server, while PEFT weights and datasets are stored on user devices. The user device fine-tunes the LLM with a PEFT method and then uploads the fine-tuned PEFT weights and dataset to the cloud server.

Multi-PEFT training: challenges include how to manage gradient and model-weight storage in memory, and how to design efficient kernels for training PEFT modules in batches.


In view of the above system design challenges, the author lists three detailed system design cases to provide a more in-depth analysis of these challenges and feasible solution strategies.

Offsite-Tuning: mainly addresses the data-privacy dilemma and the massive resource consumption involved in fine-tuning LLMs.

PetS: provides a unified serving framework with a unified management and scheduling mechanism for PEFT modules.


PEFT parallel training framework: Introduces two parallel PEFT training frameworks, including S-LoRA and Punica, and how they improve the training efficiency of PEFT.

6. Future research directions

The author believes that although PEFT techniques have succeeded on many downstream tasks, some shortcomings remain to be addressed in future work.

Establish a unified evaluation benchmark: although some PEFT libraries already exist, there is no comprehensive benchmark for fairly comparing the effectiveness and efficiency of different PEFT methods. A widely recognized benchmark would foster innovation and collaboration within the community.

Improve training efficiency: the number of trainable parameters in PEFT does not always translate into proportional computational and memory savings during training. As discussed in the efficient PEFT design section, future research could further explore ways to optimize memory and computational efficiency.

Explore scaling laws: many PEFT techniques were developed on smaller Transformer models, and their effectiveness does not necessarily carry over to today's models with very large parameter counts. Future research could explore how to adapt PEFT methods to large models.

Serve more models and tasks: with the emergence of new large-scale models such as Sora and Mamba, PEFT technology can unlock new application scenarios. Future research could focus on designing PEFT methods for specific models and tasks.

Enhanced Data Privacy: Centralized systems may face data privacy issues when serving or fine-tuning personalized PEFT modules. Future research could explore encryption protocols to protect personal data and intermediate training/inference results.

PEFT and model compression: the impact of model compression techniques such as pruning and quantization on PEFT methods has not been fully studied. Future research could examine how PEFT methods perform when applied to compressed models.


Statement: This article is reproduced from jiqizhixin.com.