
Large-scale language models show surprising reasoning capabilities in natural language processing, but their underlying mechanisms are not yet clear. With the widespread application of large-scale language models, elucidating the operating mechanisms of the models is critical to application security, performance limitations, and controllable social impacts.

Recently, several research institutions in China and the United States (New Jersey Institute of Technology, Johns Hopkins University, Wake Forest University, University of Georgia, Shanghai Jiao Tong University, Baidu, etc.) jointly released a survey of interpretability techniques for large models. It comprehensively reviews interpretability techniques for both traditionally fine-tuned models and very large prompting-based models, and discusses evaluation standards for model explanations as well as the challenges facing future research.


  • Paper link: https://arxiv.org/abs/2309.01029
  • Github link: https://github.com/hy-zhao23/Explainability-for-Large-Language-Models


Where does the difficulty of explaining large models lie?

Why is it difficult to explain large models? The remarkable performance of large language models on natural language processing tasks has attracted widespread attention, and explaining this stunning cross-task performance is one of the pressing challenges facing academia. Unlike traditional machine learning or earlier deep learning models, the ultra-large architectures and massive training corpora give large models powerful reasoning and generalization capabilities, but they also create several major difficulties in providing interpretability for large language models (LLMs):

  • High model complexity. Unlike deep learning models or traditional statistical machine learning models before the LLM era, LLMs are huge in scale and contain billions of parameters. Their internal representations and reasoning processes are very complex, making it difficult to explain their specific outputs.
  • Strong data dependence. LLMs rely on large-scale text corpora during training. Biases, errors, and other issues in these training data may affect the model, but it is difficult to fully judge the impact of training data quality on the model.
  • Black box nature. We usually think of LLMs as black box models, even for open source models such as Llama-2. It is difficult for us to explicitly judge its internal reasoning chain and decision-making process. We can only analyze it based on input and output, which makes interpretability difficult.
  • Output uncertainty. The output of LLMs is often uncertain, and different outputs may be produced for the same input, which also increases the difficulty of interpretability.
  • Insufficient evaluation indicators. The current automatic evaluation indicators of dialogue systems are not enough to fully reflect the interpretability of the model, and more evaluation indicators that consider human understanding are needed.

Training paradigm for large models

To better organize the discussion of interpretability, we divide the training paradigms of large models (at the scale of BERT and above) into two types: 1) the traditional fine-tuning paradigm; 2) the prompting-based paradigm.

Traditional fine-tuning paradigm

The traditional fine-tuning paradigm first pre-trains a base language model on a large unlabeled text corpus and then fine-tunes it on labeled datasets from a specific domain. Common models of this kind include BERT, RoBERTa, ELECTRA, DeBERTa, etc.

Prompting-based paradigm

The prompting-based paradigm implements zero-shot or few-shot learning through prompts. As in the traditional fine-tuning paradigm, the base model must first be pre-trained; fine-tuning under the prompting paradigm, however, is usually carried out via instruction tuning and reinforcement learning from human feedback (RLHF). Common models of this kind include GPT-3.5, GPT-4, Claude, LLaMA-2-Chat, Alpaca, Vicuna, etc. The training process is as follows:

[Figure: training pipeline of prompting-based models]

Model interpretation based on the traditional fine-tuning paradigm

Model interpretation under the traditional fine-tuning paradigm includes explanations of individual predictions (local explanation) and explanations of structural components of the model, such as neurons and network layers (global explanation).

Local explanation

Local explanation explains the prediction for a single sample. Its methods include feature attribution, attention-based explanation, example-based explanation, and natural language explanation.


1. Feature attribution aims to measure the relevance of each input feature (e.g., a word, phrase, or text span) to the model's prediction. Feature attribution methods can be divided into:

  • Perturbation-based interpretation, which observes the effect on the output of modifying specific input features;

  • Gradient-based interpretation, which uses the partial derivative of the output with respect to each input as that input's importance score (see the sketch after this list);

  • Surrogate models, which fit a simple, human-understandable model to the outputs of the complex model in order to obtain the importance of each input;

  • Decomposition-based techniques, which aim to linearly decompose the prediction into feature relevance scores.
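As a concrete illustration of gradient-based attribution, the sketch below computes input-times-gradient token saliency for a text classifier. It assumes PyTorch and Hugging Face transformers; the checkpoint name is only an example stand-in for any fine-tuned classifier, and this is a minimal sketch rather than the exact method of any paper covered by the survey.

```python
# Minimal input-x-gradient attribution sketch (assumes torch + transformers).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "The movie was surprisingly good."
enc = tokenizer(text, return_tensors="pt")

# Look up the input embeddings and make them a leaf tensor so per-token
# gradients can be collected.
embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
embeds.requires_grad_(True)

logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
pred = logits.argmax(dim=-1).item()

# Back-propagate the predicted-class logit to the token embeddings.
logits[0, pred].backward()

# Input-x-gradient: per-token importance = sum over embedding dims of grad * embedding.
scores = (embeds.grad * embeds).sum(dim=-1).squeeze(0)

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
for tok, s in zip(tokens, scores.tolist()):
    print(f"{tok:>12s}  {s:+.4f}")
```

Perturbation-based attribution can be sketched the same way by deleting or masking one token at a time and measuring the change in the predicted probability.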

2. Attention-based explanation: Attention is often used as a way to focus on the most relevant parts of the input, so attention may learn relevant information that can be used to explain predictions. Common attention-related explanation methods include:

  • Attention visualization techniques, which allow intuitive observation of how attention scores change at different scales (see the sketch after this list);
  • Function-based interpretation, e.g., computing the partial derivative of the output with respect to the attention weights. However, using attention as a lens for explanation remains controversial in the academic community.
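To make the attention-based view concrete, here is a small sketch that extracts self-attention weights from a pre-trained encoder and reports, for each token, the token it attends to most strongly. It assumes the Hugging Face transformers API; the checkpoint, layer, and head choices are arbitrary illustrations.

```python
# Sketch: inspect self-attention weights of a pre-trained encoder.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

text = "The bank raised interest rates."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

# out.attentions: tuple with one tensor per layer,
# each of shape [batch, num_heads, seq_len, seq_len].
layer, head = -1, 0  # last layer, first head (arbitrary choice)
attn = out.attentions[layer][0, head]
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])

# For each query token, print the key token with the highest attention weight.
for i, tok in enumerate(tokens):
    j = attn[i].argmax().item()
    print(f"{tok:>10s} -> {tokens[j]:<10s} ({attn[i, j]:.2f})")
```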

3. Example-based explanation probes and explains the model through individual examples, mainly divided into adversarial examples and counterfactual examples.

  • Adversarial examples exploit the model's sensitivity to small changes. In natural language processing they are usually obtained through text modifications that are hard for humans to notice, yet such transformations often lead the model to different predictions.
  • Counterfactual examples are obtained by transforming the text, for example by adding negation; they are usually used to test the model's causal reasoning ability (see the sketch after this list).
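A toy counterfactual check is sketched below: the input is negated by hand and the model's prediction on the original and counterfactual texts is compared. In practice counterfactuals are generated automatically or by annotators; the checkpoint name here is only an example.

```python
# Sketch: compare predictions on an original text and a hand-written counterfactual.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tok = AutoTokenizer.from_pretrained(name)
clf = AutoModelForSequenceClassification.from_pretrained(name)
clf.eval()

original = "The plot is engaging and the acting is great."
counterfactual = "The plot is not engaging and the acting is not great."

for text in (original, counterfactual):
    with torch.no_grad():
        probs = clf(**tok(text, return_tensors="pt")).logits.softmax(dim=-1)[0]
    label = clf.config.id2label[probs.argmax().item()]
    print(f"{label:>8s}  {probs.max().item():.2f}  |  {text}")
```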

4. Natural language explanation trains the model on the original text together with human-annotated explanations, so that the model can generate natural language explanations of its own decision-making process.

Global interpretation

Global interpretation aims to provide higher-level explanations of how large models work at the level of model components, including neurons, hidden layers, and larger modules. It mainly explores the semantic knowledge learned by different network components.

  • Probing-based interpretation: probing techniques are mainly based on classifier detection. A shallow classifier is trained on top of a pre-trained or fine-tuned model and then evaluated on a held-out dataset, so that the classifier reveals which linguistic features or reasoning abilities are encoded (see the sketch after this list).
  • Neuron activation: traditional neuron activation analysis considers only a subset of important neurons and then learns the relationship between those neurons and semantic features. More recently, GPT-4 has been used to explain neurons: instead of selecting a few neurons for explanation, GPT-4 can be used to explain all of them.
  • Concept-based interpretation: the input is first mapped to a set of concepts, and the model is then interpreted by measuring the importance of each concept to the prediction.
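The following minimal probing sketch freezes a pre-trained encoder, collects hidden states from one intermediate layer, and fits a shallow classifier on top. The toy sentences and labels are invented purely for illustration, and a real probe would of course be evaluated on a held-out set rather than on its own training data.

```python
# Sketch of a probing classifier: frozen encoder + shallow classifier on hidden states.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
encoder.eval()

# Toy probing task (invented labels): does the sentence contain a past-tense verb?
sentences = ["She walked home.", "He eats lunch.", "They played chess.", "I run daily."]
labels = [1, 0, 1, 0]

layer = 8  # probe an intermediate layer (arbitrary choice)
features = []
for s in sentences:
    with torch.no_grad():
        out = encoder(**tokenizer(s, return_tensors="pt"))
    # Mean-pool the token representations from the chosen layer.
    features.append(out.hidden_states[layer][0].mean(dim=0).numpy())

probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("probe training accuracy:", probe.score(features, labels))
```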

Model explanation based on prompting paradigm

Model explanation under the prompting paradigm requires explaining the base model and the assistant model separately, in order to distinguish the capabilities of the two models and to trace how they learn. The issues explored mainly include: the benefit of providing explanations for few-shot learning; and understanding the origins of few-shot learning and chain-of-thought capabilities.

Basic model explanation

  • Benefits of explanations for model learning: explore whether explanations help the model learn in the few-shot setting.
  • In-context learning: explore the mechanism of in-context learning in large models and how it differs from in-context learning in medium-sized models.
  • Chain-of-thought prompting: explore how chain-of-thought prompting improves model performance (see the sketch after this list).
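The sketch below simply contrasts a standard prompt with a chain-of-thought prompt for the same question. It uses the Hugging Face text-generation pipeline; the checkpoint name is a placeholder for any instruction-tuned model, and the prompt wording is illustrative rather than taken from the survey.

```python
# Sketch: standard prompt vs. chain-of-thought prompt on the same question.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")  # placeholder checkpoint

question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

standard_prompt = f"Question: {question}\nAnswer:"
cot_prompt = (
    f"Question: {question}\n"
    "Let's think step by step, then give the final answer.\nAnswer:"
)

for name, prompt in [("standard", standard_prompt), ("chain-of-thought", cot_prompt)]:
    out = generator(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"]
    print(f"--- {name} ---\n{out}\n")
```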

Assistant model explanation

  • The role of fine-tuning: the assistant model is usually pre-trained to acquire general semantic knowledge and then acquires domain knowledge through supervised learning and reinforcement learning. Which stage the assistant model's knowledge mainly comes from remains to be studied.
  • Hallucination and uncertainty: the accuracy and credibility of large model predictions remain important research topics. Despite the powerful inference capabilities of large models, their outputs often suffer from misinformation and hallucination, and this uncertainty in prediction poses a huge challenge to widespread deployment (a crude sampling-based check is sketched after this list).
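As a very rough illustration of prediction uncertainty, the sketch below samples the same question several times with temperature above zero and uses answer disagreement as a proxy signal. Real uncertainty estimation for LLMs is an open research problem and far more nuanced than this; the checkpoint name is again a placeholder.

```python
# Sketch: answer disagreement across samples as a crude uncertainty proxy.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")  # placeholder checkpoint

prompt = "Question: What is the capital of Australia? Answer in one word.\nAnswer:"
answers = []
for _ in range(5):
    out = generator(prompt, max_new_tokens=5, do_sample=True, temperature=1.0)[0]["generated_text"]
    completion = out[len(prompt):].strip()
    answers.append(completion.split()[0] if completion else "")

counts = Counter(answers)
_, freq = counts.most_common(1)[0]
print(f"answers: {answers}")
print(f"agreement: {freq}/{len(answers)} (higher agreement suggests lower uncertainty)")
```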

Evaluation of model interpretation

The evaluation dimensions for model explanations include plausibility, faithfulness, stability, robustness, etc. The paper focuses on two widely discussed dimensions: 1) plausibility to humans; 2) faithfulness to the model's internal logic.
Evaluation of explanations for traditionally fine-tuned models has mainly focused on local explanations. Plausibility usually requires comparing model explanations with human-annotated explanations against designed criteria. Faithfulness focuses more on quantitative metrics; since different metrics capture different aspects of the model or data, there is still no unified standard for measuring it. Evaluation of explanations for prompting-based models requires further research. (A simple faithfulness check is sketched below.)
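The sketch below illustrates one common style of faithfulness check (comprehensiveness): remove the tokens an attribution method ranks as most important and measure how much the predicted probability drops; a faithful explanation should cause a large drop. The word-level importance scores here are placeholders, and the checkpoint is just an example.

```python
# Sketch of a comprehensiveness-style faithfulness check.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tok = AutoTokenizer.from_pretrained(name)
clf = AutoModelForSequenceClassification.from_pretrained(name)
clf.eval()

def class_probs(text: str) -> torch.Tensor:
    with torch.no_grad():
        return clf(**tok(text, return_tensors="pt")).logits.softmax(dim=-1)[0]

text = "The film was a beautiful, moving experience."
words = text.split()
# Placeholder word-level importance scores; in practice these would come
# from an attribution method (e.g., the gradient sketch earlier).
importance = [0.1, 0.2, 0.1, 0.1, 0.9, 0.8, 0.7]

probs_full = class_probs(text)
label = probs_full.argmax().item()

# Remove the top-3 most important words and re-score the originally predicted class.
top_k = sorted(range(len(words)), key=lambda i: importance[i], reverse=True)[:3]
reduced = " ".join(w for i, w in enumerate(words) if i not in top_k)
probs_reduced = class_probs(reduced)

drop = probs_full[label].item() - probs_reduced[label].item()
print(f"full: {probs_full[label].item():.3f}  reduced: {probs_reduced[label].item():.3f}")
print(f"comprehensiveness ≈ {drop:.3f}")
```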

Future Research Challenges

1. Lack of effective ground-truth explanations. The challenge has two aspects: 1) there is no standard for designing effective explanations; 2) the lack of effective explanations in turn leaves the evaluation of explanations without adequate support.

2. The origin of emergent abilities is unknown. Emergence in large models can be explored from the perspectives of the model and of the data. From the model perspective: 1) which model structures give rise to emergence; 2) the minimum model scale and complexity at which performance exceeds expectations on cross-language tasks. From the data perspective: 1) which subset of the data determines a specific prediction; 2) the relationship between emergent abilities, model training, and data contamination; 3) the impact of the quality and quantity of training data on the respective effects of pre-training and fine-tuning.

3. Differences between the fine-tuning and prompting paradigms. The different behaviors on in-distribution and out-of-distribution data suggest different reasoning mechanisms: 1) the differences in reasoning between the two paradigms when data are in-distribution; 2) the sources of differences in model robustness when data are out-of-distribution.

4. Shortcut learning in large models. Under the two paradigms, shortcut learning manifests in different ways. Although large models draw on abundant data sources, which partially alleviates shortcut learning, elucidating how shortcuts form and proposing solutions remain important for model generalization.

5. Attention redundancy. Redundancy in attention modules is widespread under both paradigms, and studying it can inform model compression techniques.

6. Safety and ethics. Interpretability of large models is critical for controlling them and limiting their negative impacts, such as bias, unfairness, information pollution, and social manipulation. Building explainable AI models can help avoid these problems and lead to ethical artificial intelligence systems.
