
Why a cat? Explainable AI understands the recognition mechanism of CNN from a semantic level

王林 | 2023-04-09 23:11

In recent years, CNNs have been favored by researchers in fields such as computer vision and natural language processing because of their excellent performance. However, a CNN is a "black box" model: what it learns and how it reaches its decisions are difficult to extract and express in a form that humans can understand, which limits the credibility of its predictions and its practical application. The interpretability of CNNs has therefore received growing attention. Researchers have tried to explain the learning mechanism of CNNs through feature visualization, network diagnosis, and network architecture adjustment, making this "black box" transparent so that humans can better understand, examine, and improve its decision-making process.

Recently, research teams from Peking University, the Eastern Institute of Technology, the Southern University of Science and Technology, Pengcheng Laboratory, and other institutions proposed a research framework called semantic explainable AI (S-XAI), which explains the learning mechanism of CNNs at the semantic level. Taking the cat-vs-dog binary classification problem as an example, it vividly reveals how the model learns the category-level concept of "what is a cat".

This research focuses on the common features that a CNN learns from samples of the same category and extracts human-understandable semantic concepts from them, providing an explanation of the CNN at the semantic level. On this basis, the study proposes the concept of "semantic probability" to characterize the probability that a semantic element appears in a sample. Experiments show that S-XAI can successfully extract common features and abstract super-real yet recognizable semantic concepts in both binary and multi-class classification tasks, and that it has broad application prospects in credibility assessment and semantic sample search.

The study, titled "Semantic interpretation for convolutional neural networks: What makes a cat a cat?", was published in Advanced Science on October 10, 2022.


Paper link: https://onlinelibrary.wiley.com/doi/10.1002/advs.202204723

Code link: https://github.com/woshixuhao/semantic-explainable-AI

Model performance

Unlike previous single-sample visualization research, S-XAI extracts and visualizes the common features of a group of samples, thereby obtaining global interpretability. Based on the semantic space abstracted from these features and the calculated semantic probabilities, S-XAI can automatically generate human-understandable semantic explanations for the decision logic of the CNN and evaluate the credibility of its decisions at the semantic level.
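As a rough illustration of this last step, the sketch below turns a semantic-probability vector into a radar chart and a templated explanation sentence. The probability values, thresholds, and wording are hypothetical stand-ins for the paper's procedure, not taken from it.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical semantic probabilities for one image (illustrative values only).
semantic_probs = {"eyes": 0.92, "nose": 0.88, "legs": 0.55}

def explain(probs, high=0.8, low=0.4):
    """Map semantic probabilities to a templated explanation (made-up thresholds and wording)."""
    strong = [k for k, v in probs.items() if v >= high]
    unclear = [k for k, v in probs.items() if v < low]
    if strong and not unclear:
        return f"I am convinced it is a cat, mainly because it has vivid {', '.join(strong)}."
    if strong:
        return f"It is probably a cat: it clearly has {', '.join(strong)}, but other features are confusing."
    return "It might be a cat, but I am not sure."

# Radar chart of the semantic probabilities (one axis per semantic concept).
labels, values = list(semantic_probs), list(semantic_probs.values())
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
ax = plt.subplot(polar=True)
ax.plot(angles + angles[:1], values + values[:1], marker="o")
ax.set_xticks(angles)
ax.set_xticklabels(labels)
ax.set_ylim(0, 1)
plt.savefig("semantic_radar.png")

print(explain(semantic_probs))
```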

As shown in Figure 1, in the cat-vs-dog classification problem, for pictures of the same cat taken from three angles, S-XAI automatically generates the corresponding semantic probability radar charts and explanation statements. Although the neural network identified all of these pictures as cats with a probability above 90%, S-XAI provides additional interpretive information through the semantic probabilities, reflecting the differences between the pictures. For the frontal image, S-XAI's explanation is: "I am convinced that it is a cat, mainly because it has vivid eyes and nose, which are obviously a cat's eyes and nose. At the same time, it has lifelike legs, which are a bit like a cat's legs." This explanation indicates high credibility. For the side-view image, the explanation is: "It is probably a cat, mainly because it has eyes, maybe cat eyes, but its legs are a little confusing." For the image of the cat seen from behind, none of the semantic probabilities are pronounced, and the explanation is: "It might be a cat, but I'm not sure." Meanwhile, for a picture of a dog, S-XAI's explanation is: "I'm sure it's a dog, mainly because it has vivid eyes and a nose that are clearly those of a dog, although its legs are a bit confusing."

In fact, if the dog's upper body is covered and only its legs are visible, even humans find it difficult to tell whether it is a cat or a dog. The semantic explanations provided by S-XAI are thus accurate and consistent with human cognition, allowing humans to better understand the category recognition logic of the neural network at the semantic level.


Figure 1. Semantic probability radar chart and explanation statements automatically generated by S-XAI

S-XAI also has broad application prospects in semantic sample search. As shown in Figure 2, when users need to filter images with particular semantic features out of a large collection, S-XAI provides a fast and accurate way to do so through semantic probability. Since computing semantic probabilities only involves the forward operation (i.e., prediction) of the neural network, the process is very fast.


Figure 2. Semantic sample search example
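The search itself reduces to filtering by a score obtained from a single forward pass. The sketch below assumes a `semantic_probability` callable standing in for the paper's scoring (its internals are not reproduced here); only the filtering loop is shown.

```python
from typing import Callable, Iterable, List, Tuple

def semantic_search(images: Iterable,
                    semantic_probability: Callable[[object, str], float],
                    concept: str,
                    threshold: float = 0.8) -> List[Tuple[object, float]]:
    """Return (image, score) pairs whose semantic probability for `concept`
    exceeds `threshold`. Only forward passes are needed, so the search stays
    fast even on large image collections. Illustrative sketch only."""
    hits = []
    for img in images:
        score = semantic_probability(img, concept)
        if score >= threshold:
            hits.append((img, score))
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

# Hypothetical usage:
# cat_eye_images = semantic_search(dataset, semantic_probability, concept="cat_eyes")
```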

In the study, the researchers also showed that S-XAI scales well to multi-class classification tasks. As shown in Figure 3, on the Mini-ImageNet dataset (containing 100 animal categories), S-XAI can still extract clearly recognizable common features and semantic spaces from data of different categories (such as birds, snakes, crabs, and fish) and generate the corresponding semantic explanations.


Figure 3. Performance of S-XAI in multi-classification tasks.

Principles and Methods

Currently, common approaches to improving model interpretability fall into two categories: visualization and model intervention. Visualization methods render the feature maps, filters, or heat maps inside the CNN to reveal the features the network attends to for a given sample. Their limitation is that they can only extract individual features from a single sample, yielding local interpretability, and cannot help people understand the model's overall decision logic across data of the same type. Model intervention methods integrate existing highly interpretable models (such as tree models) into the architecture of the neural network to improve its interpretability. Although such methods offer global interpretability, they usually require retraining the model, which makes explanation costly and hinders generalization and application.
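For concreteness, a typical single-sample visualization of the kind described above is a class-activation heat map. The sketch below uses Grad-CAM on a torchvision VGG-19; this is a standard technique shown only for illustration, not part of S-XAI.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Grad-CAM style heat map for a single image: a typical single-sample
# visualization, shown only to contrast with S-XAI's group-level explanations.
model = models.vgg19(weights=models.VGG19_Weights.DEFAULT).eval()

activations, gradients = {}, {}
target_layer = model.features[34]  # last convolutional layer of VGG-19
target_layer.register_forward_hook(lambda m, i, o: activations.update(feat=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(feat=go[0]))

x = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed cat image
model(x)[0].max().backward()              # gradient of the top class score

weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)    # channel importance
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear")  # upsample to image size
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
```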

Inspired by human cognitive patterns, the researchers adopted a new explanation strategy in S-XAI to explain the CNN's category learning mechanism at the semantic level (Figure 4). In nature, objects of the same type often share certain common characteristics, which form an important basis for category cognition. For example, although cats vary in appearance, they all share some common features (such as whiskers, noses, and eyes), which allows humans to quickly identify them as cats. In experiments, the researchers found that the CNN's category learning mechanism is similar to that of humans.


Figure 4. Semantic Interpretable Artificial Intelligence Research Framework

A technique called row-centered sample compression was used in the study to extract the common features the CNN learns from samples of the same category. Unlike traditional principal component analysis, row-centered sample compression reduces the dimensionality, in the sample space, of the feature maps obtained from a large number of samples in the CNN, and extracts a small number of principal components as the common features learned by the CNN. To make the extracted common features clearer, superpixel segmentation and a genetic algorithm were used to find the optimal superpixel combination for each sample and reduce interference. The extracted common features are then displayed visually (Figure 5).

Figure 5. Extraction path of common features
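A minimal sketch of this idea, under my reading of the description above (the paper's exact centering, layer choice, and compression details may differ): stack the flattened feature maps of many same-class samples into a matrix, center each row, and take the leading principal components in the sample space as candidate common features.

```python
import numpy as np

def common_features(feature_maps: np.ndarray, n_components: int = 3) -> np.ndarray:
    """Extract shared principal components from same-class CNN feature maps.

    feature_maps: shape (n_samples, n_features), e.g. flattened feature maps of
    many cat images taken from a chosen convolutional layer. Each row (sample)
    is centered, then the leading right-singular vectors of the sample matrix
    are returned as candidate common features. Illustrative PCA-style sketch
    only, not the paper's exact algorithm.
    """
    centered = feature_maps - feature_maps.mean(axis=1, keepdims=True)  # row-centering
    _, _, vt = np.linalg.svd(centered, full_matrices=False)             # SVD over the sample space
    return vt[:n_components]                                            # (n_components, n_features)

# Hypothetical usage: pcs = common_features(flattened_cat_feature_maps)
```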

Taking the cat-vs-dog classification problem on the VGG-19 architecture as an example, the principal components extracted from the cat and dog category data are shown in Figure 6. The figure clearly shows that different principal components exhibit recognizable features at different levels: the first principal component shows complete facial features; the second shows scattered semantic concepts such as whiskers, eyes, and nose; and the third mainly shows fur characteristics. It is worth noting that the features exhibited by these principal components are super-real, that is, they do not belong to any single sample but reflect the common characteristics of all samples of the same category.


Figure 6. Visualization results of different principal components extracted from cat and dog category data

Based on the extracted common features, the researchers masked the semantic information in the samples and compared the resulting changes in the principal components to further separate the mixed semantic concepts, extract the semantic vector corresponding to each semantic concept, and abstract the semantic space. Here, the researchers used semantic concepts that humans understand, such as eyes and nose, and visualized the abstracted semantic space. After successfully extracting the semantic space, they defined the concept of "semantic probability" to characterize the probability that a semantic element appears in a sample, thereby providing a quantitative tool for explaining the CNN at the semantic level.
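One way to read this procedure, as a rough sketch (the paper's exact separation and scoring operations may differ): take the difference between principal components computed with and without a given semantic region masked as the semantic vector, and score a new sample by projecting its features onto that vector and squashing the result into [0, 1].

```python
import numpy as np

def semantic_vector(pcs_full: np.ndarray, pcs_masked: np.ndarray) -> np.ndarray:
    """Approximate a semantic direction as the change in the leading principal
    component when one semantic region (e.g. the eyes) is masked out.
    Illustrative only; not the paper's exact separation procedure."""
    v = pcs_full[0] - pcs_masked[0]
    return v / (np.linalg.norm(v) + 1e-8)

def semantic_probability(features: np.ndarray, sem_vec: np.ndarray) -> float:
    """Hypothetical scoring: project a sample's flattened features onto the
    semantic vector and squash the projection into [0, 1] with a sigmoid."""
    score = float(features @ sem_vec)
    return 1.0 / (1.0 + np.exp(-score))

# Hypothetical usage:
# eye_vec = semantic_vector(pcs_all, pcs_eyes_masked)
# p_eyes = semantic_probability(flattened_feature_map, eye_vec)
```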

As shown in Figure 7, clearly recognizable semantic concepts (vivid eyes and noses) appear in the semantic space, indicating that the semantic space has been successfully extracted from the CNN and reveals the semantic information the CNN learns from category data. At the same time, the researchers found that the CNN's understanding of semantics differs somewhat from that of humans: the "semantics" it learns are not necessarily those agreed upon by humans, and the network's semantics may even be more efficient. For example, for cats, the CNN often treats the nose and whiskers as a single semantic unit, which may be more effective. The CNN has also learned some associations between semantics; for example, a cat's eyes and nose often appear together. This aspect deserves further in-depth research.


Figure 7. Semantic vectors extracted from the CNN and the visualized semantic space (top: cat eye space; bottom: cat nose space)

Summary and Outlook

In summary, the semantic explainable AI (S-XAI) proposed in this study extracts common features and semantic spaces to explain the category recognition mechanism of CNNs at the semantic level. The framework obtains a degree of global explanation capability without changing the CNN architecture. Since it does not involve retraining the network, S-XAI responds quickly and has considerable potential in credibility assessment and semantic sample search.

In essence, S-XAI is similar to knowledge discovery. Knowledge discovery aims to find functional terms that reflect common physical laws from neural networks, whereas S-XAI aims to find semantic spaces that reflect the common characteristics of samples in CNNs. The core idea of both is to find commonalities and represent them in a form that humans can understand.


Statement: This article is reproduced from 51cto.com.