Problems with the interpretability of neural networks: revisiting the critique of NNs from thirty years ago

1 Explainable AI (XAI)

As deep neural networks (DNNs) are used for decisions that closely affect people's interests (loan approvals, job applications, bail decisions in court, and so on), and even for life-or-death decisions (such as braking suddenly on the highway), it is crucial to explain these decisions rather than merely produce a predictive score.

Recent research in explainable artificial intelligence (XAI) has focused on the concept of counterfactual examples. The idea is simple: first construct counterfactual examples that would yield the desired output and feed them into the original network; then read off the hidden-layer units to explain why the network produced some other output. More formally:

"The fraction p is returned because the variable V has the value (v1, v2, ...) associated with it. If V has the value (v′1, v ′2, ...), and all other variables remain unchanged, the score p' will be returned."

The following is a more specific example:

"You were refused a loan because your annual income was £30,000. If your income was £45,000 you would get a loan."

However, a recent paper by Browne and Swift [1] (hereafter B&W) showed that counterfactual examples are only slightly more meaningful than adversarial examples, which are generated by applying small, imperceptible perturbations to the input that cause the network to misclassify it with high confidence.
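
For contrast, the following is a minimal sketch of the kind of adversarial perturbation B&W have in mind, using a toy linear classifier instead of a deep network; the sign-of-the-gradient step mirrors the standard FGSM recipe, and every number here is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "network": score = w . x, classified as positive if the score > 0.
w = rng.normal(size=100)
x = rng.normal(size=100)
x = x - w * (w @ x) / (w @ w) + 0.01 * w    # place x just barely on the positive side

eps = 0.05                                  # small, imperceptible per-feature budget
x_adv = x - eps * np.sign(w)                # FGSM-style step against the predicted class

print("original score: ", round(float(w @ x), 3))        # slightly positive
print("perturbed score:", round(float(w @ x_adv), 3))    # clearly negative
print("max per-feature change:", eps)
```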

Furthermore, counterfactual examples "explain" what some feature values would have to be to obtain the desired prediction, but they "do not open the black box"; that is, they do not explain how the algorithm works. The article goes on to argue that counterfactual examples do not provide a solution to interpretability and that "without semantics there is no explanation".

In fact, the article even makes a stronger suggestion:

1) Either we find a way to extract the semantics that are assumed to exist in the hidden layers of the network, or

2) we admit that we have failed.

Walid S. Saba himself is pessimistic about (1). In other words, he regretfully admits failure. His reasons follow.

2 The "Ghost" of Fodor and Pylyshyn

Although the author fully agrees with B&W's view that "there is no explanation without semantics", he believes that the hope of interpreting the semantics of hidden-layer representations in deep neural networks, so as to produce satisfactory explanations for deep learning systems, cannot be realized, for reasons outlined more than thirty years ago by Fodor and Pylyshyn [2].

Walid S. Saba then argues: before explaining where the problem lies, we need to note that purely extensional models (such as neural networks) cannot model systematicity and compositionality, because they do not admit symbolic structures with a derivable syntax and a corresponding semantics.

Thus, representations in neural networks are not really "symbols" that correspond to anything interpretable; they are distributed, correlated, and continuous numerical values that do not by themselves mean anything that can be explained conceptually.

In simpler terms, the subsymbolic representations in neural networks do not themselves refer to anything that humans can conceptually understand (a hidden unit on its own cannot stand for any object of metaphysical significance). Rather, it is a collection of hidden units that together typically represent some salient feature (e.g., a cat's whiskers).

But this is exactly why neural networks cannot achieve interpretability: the combination of several hidden features cannot be determined after the fact - once the combination has been performed (through some linear combination function), the individual units are lost (we show this below).

3 Interpretability is "reverse reasoning", and DNNs cannot do reverse reasoning

The author then discusses why Fodor and Pylyshyn reached the conclusion that NNs cannot model systematic (and therefore interpretable) inference [2].

In symbolic systems there are well-defined compositional semantic functions that compute the meaning of a compound expression from the meanings of its constituents. Crucially, this composition is reversible: one can always recover the (input) constituents that produced a given output, precisely because in a symbolic system one has access to a "syntactic structure" that records how the components were assembled. None of this is true in NNs. Once vectors (tensors) are combined in an NN, their decomposition cannot be determined (there are infinitely many ways to decompose a vector, or even a scalar!).
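
A toy sketch of this contrast (my own illustration, not taken from B&W or from Fodor and Pylyshyn): a symbolic composition keeps the structure that built it, so its constituents can be read back out, while a numeric combination of values collapses to a single number with infinitely many possible decompositions.

```python
# Symbolic composition: the structure that combined the parts is preserved,
# so the composition is reversible (the constituents can be recovered).
# The relation label "REL" is just a placeholder for this illustration.
symbolic = ("REL", "John", "classic rock")     # a tiny "syntax tree"
rel, subj, obj = symbolic                      # decomposition is trivial and unique

# Numeric (extensional) composition: combining values loses the parts.
x1, w1 = 0.5, 1.0
x2, w2 = 0.4, 0.925
z = w1 * x1 + w2 * x2                          # z = 0.87 (up to rounding)

# Given only z, there is no way to recover (x1, w1, x2, w2):
# (0.5, 1.0, 0.4, 0.925), (0.87, 1.0, 0.0, 0.0), (0.29, 3.0, 0.0, 0.0), ...
# all produce exactly the same z.
print(rel, subj, obj)     # the symbolic constituents come back out intact
print(round(z, 2))        # 0.87, with no trace of how it was built
```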

To illustrate why this is a problem at its core, let us consider B&W's proposal for extracting semantics from DNNs to achieve interpretability. B&W's recommendation is to produce explanations along the following lines:

"The input image is labeled 'building' because hidden neuron 41435, which normally activates for hubcaps, has an activation value of 0.32. If the activation value of hidden neuron 41435 were 0.87, the input image would be labeled 'car'."

To understand why this does not lead to interpretability, just note that requiring neuron 41435 to have an activation of 0.87 is not enough. For simplicity, assume that neuron 41435 has only two inputs, x1 and x2. What we have now is shown in Figure 1 below:

[Figure 1: a single neuron (here, neuron 41435) with two inputs x1 and x2, weights w1 and w2, and output z = f(w1·x1 + w2·x2), where f is the activation function]

Now assume that our activation function f is the popular ReLU function and that it produced an output of z = 0.87. With ReLU, z = max(0, w1·x1 + w2·x2), so an output of 0.87 is obtained for any values of x1, x2, w1 and w2 whose weighted sum is 0.87, such as those in the table below.

[Table: several different combinations of x1, x2, w1 and w2 that all produce the value 0.87]

Looking at the table, it is easy to see that there are countless linear combinations of x1, x2, w1 and w2 that will produce an output of 0.87 (the short sketch after this section enumerates a few of them). The important point is that composition in an NN is irreversible, so no meaningful semantics can be recovered from any neuron or any collection of neurons.

In keeping with B&W's slogan "no semantics, no explanation", we can state that no explanation will ever be obtained from an NN. In short, there is no semantics without compositionality, there is no explanation without semantics, and DNNs cannot model (reversible) compositionality. This can be formalized as follows:

1. Without semantics, there is no explanation [1]

2. Without reversible compositionality, there is no semantics [2]

3. Compositionality in DNNs is irreversible [2]

=> DNNs cannot be explained (no XAI)

End.

Incidentally, the fact that compositionality in DNNs is irreversible has consequences beyond the inability to produce interpretable predictions, especially in fields that require higher-level reasoning, such as natural language understanding (NLU).

In particular, such a system cannot explain how a child can learn to interpret an infinite number of sentences from a single three-slot template ( _ _ _ ), where "John", "the neighbor's girl", "the boy who always comes here wearing a T-shirt", etc. are all possible instantiations of the first slot, and "classic rock", "fame", "Mary's grandma", "running on the beach", etc. are all possible instantiations of the last.

Because such systems have no "memory" and their composition cannot be reversed, in theory they need countless examples to learn even this simple structure. [Editor's note: this point was precisely Chomsky's challenge to structural linguistics, and it launched the transformational-generative grammar that has influenced linguistics for more than half a century.]
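
As a concrete check of the "countless combinations" point, here is a minimal sketch (my own, with randomly chosen values rather than those from the article's table) that finds several different settings of x1, x2, w1 and w2 all yielding the same ReLU output of 0.87:

```python
import numpy as np

def neuron_output(x1, x2, w1, w2):
    # A single neuron with a ReLU activation: z = max(0, w1*x1 + w2*x2)
    return max(0.0, w1 * x1 + w2 * x2)

target = 0.87
rng = np.random.default_rng(42)

# Pick x1, x2 and w1 at random, then solve for the w2 that hits the target exactly.
solutions = []
while len(solutions) < 5:
    x1, x2, w1 = rng.uniform(0.1, 1.0, size=3)
    w2 = (target - w1 * x1) / x2              # exact solution for this choice
    if abs(neuron_output(x1, x2, w1, w2) - target) < 1e-9:
        solutions.append((x1, x2, w1, w2))

for s in solutions:
    print("x1=%.3f  x2=%.3f  w1=%.3f  w2=%.3f  ->  z=%.2f" % (*s, neuron_output(*s)))
```

Any one of these settings makes the neuron fire at exactly 0.87, which is why pointing at that activation value explains nothing about the input that produced it.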

Finally, the author emphasizes that more than thirty years ago Fodor and Pylyshyn [2] put forward a critique of NNs as a cognitive architecture: they showed why NNs cannot model systematicity, productivity, and compositionality, all of which are necessary to talk about anything "semantic". This is a compelling criticism that has never been satisfactorily answered.

As the need to solve the problem of explainability in AI becomes critical, we must revisit that classic paper, because it shows the limits of equating statistical pattern recognition with progress in artificial intelligence.
