


Integrating more than 200 related studies, the latest review of the large model 'lifelong learning' is here

The AIxiv column is where this site publishes academic and technical content. Over the past several years, the AIxiv column has received more than 2,000 reports covering top research labs at leading universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to contribute or contact us for coverage. Submission emails: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com
Paper title: Towards Lifelong Learning of Large Language Models: A Survey
Institution: South China University of Technology
Paper: https://arxiv.org/abs/2406.06391
Project: https://github.com/qianlima-lab/awesome-lifelong-learning-methods-for-llm
The survey's main contributions:
Novel classification: it develops a detailed structured framework that divides the extensive lifelong-learning literature into 12 scenarios.
Universal techniques: it identifies techniques common to all lifelong-learning settings, and within each scenario groups the literature by technique.
Future directions: it highlights emerging topics such as model expansion and data selection, which were less explored in the pre-LLM era.
Internal knowledge refers to absorbing new knowledge into the model parameters through full or partial training, covering continual pre-training and continual fine-tuning.
External knowledge refers to incorporating new knowledge from external resources such as Wikipedia or application programming interfaces without updating the model parameters, covering retrieval-based lifelong learning and tool-based lifelong learning.
Continual Vertical Domain Pretraining: continual pretraining for specific vertical domains (e.g. finance, medicine).
Continual Language Domain Pretraining: continual pretraining for natural languages and code.
Continual Temporal Domain Pretraining: continual pretraining for time-sensitive data (e.g. time-series data).
Task-specific:
Continual Text Classification: continual fine-tuning for text classification tasks. Continual Named Entity Recognition: continual fine-tuning for named entity recognition tasks. Continual Relation Extraction: continual fine-tuning for relation extraction tasks. Continual Machine Translation: continual fine-tuning for machine translation tasks.
Task-agnostic:
Continual Instruction-Tuning: continual learning achieved through instruction fine-tuning. Continual Knowledge Editing: continual learning for knowledge updates. Continual Alignment: continual learning to align the model with new objectives.
Overall measurement: average accuracy (AA) and average incremental accuracy (AIA). AA is the model's average performance across all tasks after learning them all, while AIA additionally accounts for the performance trajectory after each task is learned. Stability measurement: forgetting (FGT) and backward transfer (BWT). FGT evaluates the average performance degradation on old tasks, while BWT evaluates the average performance change on old tasks (positive BWT means learning new tasks improved old ones). Plasticity measurement: forward transfer (FWD), the average improvement of the model's performance on new tasks.
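The metrics above can be sketched concretely. Assume a T x T accuracy matrix `A` where `A[i][j]` is the accuracy on task j after the model has finished learning task i; the matrix values below are made-up illustrative numbers, not results from the survey.

```python
def average_accuracy(A):
    """AA: mean accuracy over all tasks after learning the last one."""
    T = len(A)
    return sum(A[T - 1]) / T

def average_incremental_accuracy(A):
    """AIA: mean of the AA values measured after each learning step."""
    T = len(A)
    return sum(sum(A[i][: i + 1]) / (i + 1) for i in range(T)) / T

def backward_transfer(A):
    """BWT: average change on old tasks; negative values indicate forgetting."""
    T = len(A)
    return sum(A[T - 1][j] - A[j][j] for j in range(T - 1)) / (T - 1)

# Hypothetical accuracies for three sequential tasks.
A = [
    [0.90, 0.00, 0.00],
    [0.80, 0.85, 0.00],
    [0.75, 0.80, 0.88],
]
print(round(average_accuracy(A), 3))    # 0.81
print(round(backward_transfer(A), 3))   # -0.1
```

Here BWT is negative, matching the intuition that the model forgot some of tasks 1 and 2 while learning task 3.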
Meaning: this method replays data from previous tasks while training a new task, consolidating the model's memory of old tasks. The replayed data is usually stored in a buffer and trained alongside the current task's data. It mainly includes:
– Experience Replay: reduces forgetting by saving a subset of old tasks' data samples and reusing them during training on new tasks.
– Generative Replay: instead of saving old data, this method uses a generative model to create pseudo-samples, thereby injecting old-task knowledge into new-task training.
Illustration: Figure 3 shows the process from Task t-1 to Task t. While training Task t, the old data in the buffer (Input t-1) is used.
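A minimal sketch of experience replay: a fixed-size buffer keeps samples from earlier tasks (here via reservoir sampling) and mixes them into each new-task batch. All class and function names are illustrative, not from the survey.

```python
import random

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # Reservoir sampling: every seen sample is kept with equal probability.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = sample

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def mixed_batch(buffer, new_batch, replay_ratio=0.5):
    """Combine current-task samples with replayed old-task samples."""
    k = int(len(new_batch) * replay_ratio)
    return new_batch + buffer.sample(k)

buf = ReplayBuffer(capacity=10)
for i in range(100):          # pretend these are old-task samples
    buf.add(i)
batch = mixed_batch(buf, list(range(8)))
print(len(batch))             # 12: 8 new samples + 4 replayed ones
```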
Meaning: this method imposes regularization constraints on the model parameters to prevent the model from over-adjusting old-task parameters when learning a new task; the constraints help the model retain its memory of old tasks. It mainly includes:
– Weight Regularization: imposes additional constraints on model parameters, limiting the modification of important weights when training new tasks and thereby protecting old-task knowledge. L2 regularization and Elastic Weight Consolidation (EWC) are common techniques.
– Feature Regularization: regularization can act not only on weights but also in the feature space, constraining the model's representations so that the feature distributions of new and old tasks remain stable.
Illustration: Figure 3 shows the process from Task t-1 to Task t. While training Task t, parameter regularization is used to maintain performance on Task t-1.
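A hedged sketch of an EWC-style weight penalty: when training task t, a quadratic cost is added for moving parameters that were important for task t-1. The per-parameter importances (`fisher`) and the old optimum (`theta_old`) would come from the previous task in practice; the values below are made up for illustration.

```python
def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """L = (lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2"""
    return 0.5 * lam * sum(
        f * (t - t0) ** 2 for f, t, t0 in zip(fisher, theta, theta_old)
    )

theta_old = [1.0, -0.5, 2.0]   # parameters after finishing the old task
fisher    = [0.9,  0.1, 0.5]   # high Fisher value => important for the old task
theta     = [1.2, -0.5, 1.0]   # current parameters while training the new task

# Moving the third (important) parameter far from its old value dominates the cost.
print(round(ewc_penalty(theta, theta_old, fisher, lam=2.0), 3))  # 0.536
```

During training this penalty would simply be added to the new task's loss before backpropagation.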
Meaning: this approach adapts the model structure to integrate new tasks while minimizing interference with previously learned knowledge. It mainly includes the six methods in Figure 4:
– (a) Prompt Tuning: prepends "soft prompts" to the model's input to guide generation or classification. Only a small number of parameters (the prompts) need adjusting, without changing the model's backbone.
– (b) Prefix Tuning: adds trainable parameters to the prefix of the input sequence; these parameters are inserted into the self-attention mechanism of the Transformer layers to help the model better capture contextual information.
– (c) Low-Rank Adaptation (LoRA): adapts to new tasks by adding low-rank matrices at specific layers without changing the main weights of the large model, greatly reducing the number of adjusted parameters while maintaining performance.
– (d) Adapters: trainable modules inserted between layers of the model that adapt to new tasks with a small number of additional parameters, leaving the original model weights unchanged. They are usually applied in the FFN (Feed-Forward Network) and MHA (Multi-Head Attention) parts.
– (e) Mixture of Experts: processes different inputs by selectively activating certain "expert" modules, which can be specific layers or subnetworks in the model; a router module decides which experts to activate.
– (f) Model Expansion: expands model capacity by adding new layers while retaining the original layers, allowing the model to gradually grow to accommodate more complex task requirements.
Illustration: Figure 3 shows the process from Task t-1 to Task t. When the model learns a new task, some parameters are frozen while the newly added modules are trainable and used for the new task.
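The LoRA idea from (c) can be sketched in a few lines: the frozen weight W is augmented with a trainable low-rank update, so the effective output is W x + (alpha / r) * B (A x). The pure-Python matrix math and the tiny 2x2 example below are purely illustrative.

```python
def matvec(M, v):
    """Multiply matrix M (a list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha=2.0):
    """h = W x + (alpha / r) * B (A x); W is frozen, A and B are trainable."""
    r = len(A)                              # rank = number of rows of A
    base = matvec(W, x)                     # frozen pretrained path
    low_rank = matvec(B, matvec(A, x))      # trainable low-rank path
    scale = alpha / r
    return [b + scale * u for b, u in zip(base, low_rank)]

# Toy rank-1 update on top of an identity weight (all values made up).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]      # shape (r=1, d_in=2)
B = [[1.0], [0.0]]    # shape (d_out=2, r=1)
print(lora_forward([2.0, 3.0], W, A, B, alpha=2.0))  # [12.0, 3.0]
```

Only A and B would receive gradients during continual fine-tuning; W stays fixed, which is why the old task's knowledge in W is not overwritten.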
Meaning: this method transfers the old model's knowledge to the new model through knowledge distillation. While training a new task, the new model not only learns from the current task's data but also imitates the old model's outputs on old tasks, thereby retaining old-task knowledge.
Illustration: Figure 3 shows the process from Task t-1 to Task t. While training the new task, the model retains old-task knowledge by imitating the old model's predictions.
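The distillation signal described above can be sketched as a temperature-scaled KL divergence between the old model's (teacher's) and the new model's (student's) output distributions; the logits below are arbitrary example values.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened probability distribution over the logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# The closer the student's outputs track the teacher's, the smaller the loss.
loss = distillation_loss([2.0, 1.0, 0.1], [2.1, 0.9, 0.2])
```

In practice this term is added to the new task's training loss, so the student fits new data while staying close to the teacher on old inputs.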
Example: CorpusBrain++ uses a backbone-adapter architecture and an experience replay strategy to tackle real-world knowledge-intensive language tasks. Example: Med-PaLM introduces instruction prompt tuning in the medical domain using a small number of examples.
Example: ELLE adopts a feature-preserving model expansion strategy, flexibly expanding the width and depth of existing pre-trained language models to improve the efficiency of knowledge acquisition and integration. Example: LLaMA Pro excels at general, programming, and math tasks by extending the Transformer blocks and fine-tuning them with a new corpus.
Example: the strategy proposed by Gupta et al. adjusts the learning rate when new datasets are introduced, preventing the learning rate from becoming too low over long training runs and improving adaptation to new datasets.
Example: RHO-1 is trained with a Selective Language Model (SLM), which prioritizes tokens that have a greater impact on training. Example: EcomGPT-CT enhances performance on domain-specific tasks using semi-structured e-commerce data.
Example: Yadav et al. improve prompt tuning by introducing a teacher-forcing mechanism, creating a set of prompts that guide the model's fine-tuning on new tasks. Example: ModuleFormer and Lifelong-MoE use a mixture-of-experts (MoE) approach, enhancing the efficiency and adaptability of LLMs through modularity and dynamically increased model capacity.
Example: the re-warming method proposed by Ibrahim et al. helps the model adapt to new languages faster by temporarily increasing the learning rate when training on new data.
Example: the continual text classification task trains the model by gradually introducing new classification categories (e.g. Intent: Transfer -> Intent: Credit Score -> Intent: Fun Fact), so that it can adapt to evolving classification needs.
Example: the continual named entity recognition task shows how new entity types are gradually introduced (e.g. Athlete -> Sports Team -> Politician) while the model retains its ability to recognize previously learned entity types.
Example: the continual relation extraction task shows how the model gradually expands its relation-extraction capabilities as new relation types are introduced (e.g. Relation: Founded By -> Relation: State or Province of Birth -> Relation: Country of Headquarters).
Example: the continual knowledge editing task ensures the model can accurately answer the latest facts by continually updating its knowledge base (e.g. Who is the president of the US? -> Which club does Cristiano Ronaldo currently play for? -> Where was the last Winter Olympics held?).
Example: the continual machine translation task demonstrates the model's adaptability in a multilingual environment by gradually extending its translation capabilities to different languages (e.g. English -> Chinese, English -> Spanish, English -> French).
Example: the continual instruction-tuning task trains the model's capabilities across multiple task types by gradually introducing new instruction types (e.g. Summarization -> Style Transfer -> Mathematics).
Example: the continual alignment task demonstrates the model's continual learning under different moral and behavioral standards by introducing new alignment goals (e.g. Helpful and Harmless -> Concise and Organized -> Positive Sentiment).
Introduction: as information in the world grows and evolves rapidly, static models trained on historical data quickly become outdated, unable to understand or generate content about new developments. Retrieval-based lifelong learning addresses the critical need for large language models to acquire and assimilate up-to-date knowledge from external sources: the model supplements or updates its knowledge base by retrieving these external resources on demand. Such resources provide a large, current knowledge base, an important complement to the static nature of pretrained LLMs. Example: the external resources in the figure are accessible and retrievable by the model. By consulting external information sources such as Wikipedia, books, and databases, the model is able to update its knowledge and adapt when it encounters new information.
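The retrieve-then-answer loop described above can be sketched as follows. The tiny keyword-overlap retriever stands in for a real dense retriever, and the two-document corpus and query are made-up examples, not from the survey.

```python
def score(query, doc):
    """Crude relevance score: number of shared lowercase words."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def retrieve(query, corpus, k=1):
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

corpus = [
    "The 2022 Winter Olympics were held in Beijing.",
    "LoRA adds low-rank matrices to frozen weights.",
]
context = retrieve("Where were the last Winter Olympics held?", corpus, k=1)

# The retrieved passage is prepended to the prompt, so the (frozen) model
# can answer from fresh external knowledge instead of stale parameters.
prompt = f"Context: {context[0]}\nQuestion: Where were the last Winter Olympics held?"
print(context[0])
```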
Introduction: tool-based lifelong learning arises from the need to extend the model's functionality beyond static knowledge and enable it to interact dynamically with its environment. In real-world applications, models are often required to perform tasks involving operations beyond direct text generation or interpretation. Example: the model in the figure uses these tools to extend and update its own capabilities, achieving lifelong learning through interaction with external tools. For instance, models can obtain real-time data through application programming interfaces, or interact with the external environment through physical tools to complete specific tasks or acquire new knowledge.
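A minimal sketch of the tool-use loop: the model emits a structured tool call, a dispatcher executes it, and the result is fed back into the conversation. The tool registry and the parsed call format below are hypothetical stand-ins for a real function-calling API.

```python
import datetime

# Hypothetical tool registry mapping tool names to callables.
TOOLS = {
    "current_year": lambda: datetime.date.today().year,
    "add": lambda a, b: a + b,
}

def dispatch(call):
    """Execute a parsed tool call of the form {'name': ..., 'args': [...]}."""
    fn = TOOLS[call["name"]]
    return fn(*call.get("args", []))

# Pretend the model emitted this call; its result would be appended to the
# context so the model can use fresh, externally computed information.
result = dispatch({"name": "add", "args": [19, 23]})
print(result)  # 42
```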
Catastrophic Forgetting: one of the core challenges of lifelong learning; newly introduced information may overwrite what the model has learned previously. Plasticity-Stability Dilemma: finding the balance between the model's learning ability and its stability is critical, as it directly affects the model's ability to acquire new knowledge while retaining its broad general capabilities. Expensive Computation Cost: fully fine-tuning a large language model can be computationally very demanding. Unavailability of Model Weights or Pre-trained Data: due to privacy, proprietary restrictions, or commercial licensing, raw training data or model weights are often unavailable for further improvement.
From specific tasks to general tasks: research is gradually shifting from specific tasks (e.g. text classification, named entity recognition) to a broader range of general tasks, such as instruction tuning and knowledge editing. From full fine-tuning to partial fine-tuning: given the high resource cost of full fine-tuning, partial fine-tuning strategies (e.g. adapter layers, prompt tuning, LoRA) are becoming increasingly popular. From internal knowledge to external knowledge: to overcome the limitations of frequent internal updates, more and more strategies draw on external knowledge sources, such as retrieval-augmented generation and tool learning, enabling models to dynamically access and exploit current external data.
Multimodal lifelong learning: integrating modalities beyond text (e.g. images, video, audio, time-series data, knowledge graphs) into lifelong learning to develop more comprehensive and adaptive models. Efficient lifelong learning: researchers are developing more efficient strategies for managing the computational demands of training and updating models, such as model pruning, model merging, and model expansion. Universal lifelong learning: the ultimate goal is to enable large language models to actively acquire new knowledge and learn through dynamic interaction with the environment, no longer relying solely on static datasets.