


Preface
In the past few years, artificial intelligence technology, led by neural networks, has deepened our ability to understand and mine different types of data, profoundly changed human life, and greatly advanced social development [1]. As one of the most active research directions in artificial intelligence, graph neural networks (GNNs) have been widely applied in daily life, for example in personalized recommendation, thanks to their excellent performance, and are also used in cutting-edge scientific fields such as COVID-19 drug research and development. With the vigorous development of graph neural network technology, however, people have found that GNN systems designed with task performance as the single goal still suffer from problems such as vulnerability to malicious attacks. There is therefore a growing desire to build reliable, trustworthy graph neural networks.
In recent years, building trustworthy artificial intelligence systems has become a broad consensus among countries around the world [2][3], and how to comprehensively establish trustworthy graph neural networks has become a major problem in urgent need of a solution. This article presents the latest review of trustworthy graph neural networks, written by the Monash team (Shirui Pan, Xingliang Yuan, Bang Wu, He Zhang) together with Hanghang Tong (UIUC) and Jian Pei (SFU, soon to join Duke) (36 double-column pages, 299 references).
Starting from the research background and characteristics, this review proposes an open framework for trustworthy graph neural networks and focuses on six trustworthiness dimensions of GNNs (robustness, explainability, privacy, fairness, accountability, and environmental well-being) and their technical methods. It also explores the interactions between the different trustworthiness dimensions, proposes future research directions for trustworthy graph neural networks, and draws a detailed and comprehensive technical roadmap for establishing them.
Review name: Trustworthy Graph Neural Networks: Aspects, Methods and Trends
Full text link: https://arxiv.org/pdf/2205.07424.pdf
Github: https://github.com/Radical3-HeZhang/Awesome-Trustworthy-GNNs
1 Introduction
Graphs are a data type with extremely strong representation ability. By describing the characteristics of entities and the relationships between them, graphs have been widely used to represent data in many fields such as biology, chemistry, physics, linguistics, and the social sciences. In recent years, the vigorous development of graph neural network technology has revolutionized the performance of various graph computing tasks and promoted their widespread application in real life.
In daily life, by modeling the interactions between users and content or services, graph neural networks can provide personalized search and recommendation services in consumer applications such as streaming media, online shopping, and social software. At the frontiers of science, by using graph data to represent complex systems, researchers can use graph neural networks to discover hidden patterns behind the motion of celestial bodies. Through applications such as fake-news detection and COVID-19 drug development, graph neural networks have greatly improved the well-being of our society.
Although researchers have designed methods to further improve the performance of graph neural networks from many perspectives (such as self-supervised learning and increasing model depth), in some key areas task performance is not the only design goal. For example, anomaly detection systems based on graph neural networks need to be robust to malicious attacks; credit scoring systems based on graph neural networks should not reject loan applications because of factors such as a user's age or gender; and drug discovery applications based on graph neural networks should provide researchers with full explanations of their results.
Based on the above needs, people increasingly want graph neural network-based systems to be trustworthy. Against this background, this review aims to summarize the latest progress on "Trustworthy GNNs", provide a technical roadmap for relevant researchers and practitioners, and give direction to future research and industrial development of trustworthy GNNs.
The main contributions of this review are: 1) it describes trustworthy graph neural networks with an open framework containing many trustworthiness dimensions, and identifies typical differences between trustworthiness research on graph neural networks and on other common artificial intelligence technologies (such as CNNs); 2) it comprehensively summarizes existing methods for the different trustworthiness dimensions of graph neural networks; 3) it argues that the relationships between different trustworthiness dimensions are crucial for building trustworthy graph neural network systems, and summarizes existing work at both the method and effect levels; 4) by treating the trustworthy graph neural network as a whole, it proposes potential future research directions.
2 Graph neural networks and trustworthiness
To facilitate readers' understanding, this article first introduces the following core concepts.
Graph data: A graph G is generally composed of a node set V and an edge set E, i.e., G = (V, E). The number of nodes is n = |V| and the number of edges is m = |E|. Given a graph G, its topology can be represented by an adjacency matrix A ∈ {0, 1}^(n×n), where A_ij describes the connection between node v_i and node v_j: if v_i and v_j are connected, then A_ij = 1; otherwise A_ij = 0. If the nodes carry attributes, a feature matrix X ∈ R^(n×d) can be used to describe this attribute information. The graph can then also be represented as G = (A, X).
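As a concrete illustration, the adjacency and feature matrices described above can be built in a few lines of NumPy (a minimal sketch with made-up toy values, not an example from the review):

```python
import numpy as np

# A toy graph with n = 4 nodes and undirected edges (0,1), (1,2), (2,3).
n = 4
edges = [(0, 1), (1, 2), (2, 3)]

# Adjacency matrix A: A[i, j] = 1 if nodes i and j are connected, else 0.
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # undirected graph, so A is symmetric

# Feature matrix X: one d-dimensional attribute vector per node (d = 2 here).
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.5, 0.5]])

# The graph can then be represented as the pair G = (A, X).
G = (A, X)
```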
Graph neural networks (GNNs): Graph neural networks are a general term for a family of neural networks that can be used for computing tasks on graph data (such as node classification, link prediction, and graph classification). A typical operation in graph neural networks is message passing: during message propagation, the graph neural network updates the representation of the current node by aggregating the information of all its neighbor nodes. Combined with other operations (such as nonlinear activation), the graph neural network computes the corresponding data representations after multiple representation-update iterations.
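The message-passing update described above can be sketched as a simplified mean-aggregation layer (a GCN-style illustration; the toy graph, features, and weight matrix W below are assumptions for demonstration, not code from the review):

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One message-passing step: each node aggregates the mean of its
    neighbours' representations (plus its own, via a self-loop), then
    applies a linear map and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])           # self-loops keep each node's own info
    deg = A_hat.sum(axis=1, keepdims=True)   # node degrees for mean aggregation
    H_agg = (A_hat @ H) / deg                # aggregate neighbour representations
    return np.maximum(0.0, H_agg @ W)        # linear transform + ReLU

# Toy example: 3-node path graph (0-1-2) with 2-dim features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)

H1 = message_passing_layer(A, H, W)   # representations after one update
H2 = message_passing_layer(A, H1, W)  # stacking layers widens the receptive field
```

Stacking such layers is what lets information propagate beyond immediate neighbours, which is the "multiple representation update iterations" mentioned above.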
Trustworthiness: "Trustworthy" describes a system that is worthy of being trusted; it concerns the trust relationship between a trust initiator (the trustor) and a trust recipient (the trustee). In the context of trustworthy graph neural networks, the trustee is the graph neural network system, and the trustor can be users, developers, regulatory authorities, or even society as a whole.
Trustworthy GNNs are defined as graph neural networks that achieve both trustworthiness and strong performance. The trustworthiness dimensions include, but are not limited to, robustness, explainability, privacy, fairness, accountability, and well-being. The original definition is as follows: "In this survey, we define trustworthy GNNs as competent GNNs that incorporate core aspects of trustworthiness, including robustness, explainability, privacy, fairness, accountability, well-being, and other trust-oriented characteristics in the context of GNNs."
3 Review Framework
In Chapter 1, this review introduces the research background, the definition of trustworthy graph neural networks, the definitions and metrics of the different trustworthiness dimensions and how research on them differs, the relationship to existing reviews, and its main contributions. Chapter 2 introduces the basic concepts and computational tasks of graph neural networks. Chapters 3 to 8 introduce and summarize typical technical methods, and discuss future research directions, for the six aspects of robustness, explainability, privacy, fairness, accountability, and environmental well-being respectively. Chapter 9 summarizes the complex relationships between these six trustworthiness dimensions at both the method and effect levels. Finally, Chapter 10 treats the trustworthy graph neural network as a whole and proposes five directions for future research and industrialization, aiming at the comprehensive construction of trustworthy graph neural network systems.
4 Robustness
Robustness refers to the ability of a graph neural network to maintain stable predictions when faced with perturbations. The predictions of graph neural networks can be affected by a variety of perturbations (especially deliberate attacks on graph neural networks), which poses severe challenges for applying them in scenarios involving personal and property safety, such as fraud detection in banking systems and traffic prediction and planning for autonomous driving. Research on robustness is therefore an indispensable part of trustworthy graph neural networks.
This review summarizes current work on the robustness of graph neural networks and introduces taxonomies of adversarial attacks and defenses together with some typical methods. The attack taxonomy is derived from an analysis of the threat model, as shown in the figure below; the defense taxonomy is based mainly on the phase in which a technique is applied (target phase).
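To make the idea of a structure-perturbation attack concrete, the following sketch brute-forces single edge flips against a toy linear surrogate model and keeps the flip that moves a target node's output the most. The surrogate model, toy graph, and scoring rule are all illustrative assumptions; real attacks in the literature (e.g., gradient-based ones) are far more sophisticated:

```python
import numpy as np

def gcn_logits(A, X, W):
    """Surrogate 1-layer linear GCN used by the attacker (no nonlinearity)."""
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1, keepdims=True)
    return (A_hat @ X) / deg @ W

def best_edge_flip(A, X, W, target):
    """Greedy structure attack: try every single edge flip and keep the
    one that changes the target node's output the most."""
    base = gcn_logits(A, X, W)[target]
    best, best_score = None, -1.0
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            A_p = A.copy()
            A_p[i, j] = A_p[j, i] = 1 - A_p[i, j]   # flip edge (i, j)
            score = np.abs(gcn_logits(A_p, X, W)[target] - base).sum()
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score

# Toy graph: path 0-1-2 with 2-dim node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
W = np.eye(2)
flip, impact = best_edge_flip(A, X, W, target=0)
```

Even this crude search illustrates why robustness matters: a single well-chosen edge flip can noticeably shift a node's prediction.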
When introducing the different categories of methods, the authors discuss how their characteristics differ from those used with other common artificial intelligence techniques (such as CNNs), for example the different taxonomies and naming variations for specific types of perturbation operations and attack goals.
In addition, the authors discuss and summarize the applicability of these attack and defense methods. For example, they compare and analyze different types of defense methods from three perspectives: stage of application, modularity, and deployment compatibility.
Finally, based on the current state of the art, this review proposes two future research directions: developing standardized robustness evaluations and improving the scalability of existing defense methods (defence scalability).
5 Explainability
Explainability refers to the ability to make the predictions of a graph neural network understandable by humans. If its predictions cannot be understood, people will not trust a graph neural network, and this lack of trust will further limit its application in scenarios involving fairness (such as credit risk prediction), information security (such as chip design), and life safety (such as autonomous driving). A trustworthy graph neural network system therefore needs to provide explanations for its predictions.
After introducing basic concepts such as explanation forms and categories of explanation methods, this review divides work on the explainability of graph neural networks into two categories: self-explaining models (interpretable GNNs) and post-hoc explainers. Interpretable GNNs mainly include contribution estimation, the introduction of interpretable modules, prototype learning, and rationale generation methods. Post-hoc explainers mainly include gradient/feature-based methods, perturbation-based methods, surrogate methods, decomposition methods, generation methods, and others.
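Among the post-hoc families listed, a perturbation-based explainer can be illustrated with a minimal sketch: delete each edge in turn and score it by how much the target node's prediction changes. The toy model and data below are assumptions for illustration, not a specific method from the review:

```python
import numpy as np

def gcn_predict(A, X, W):
    """Fixed GNN being explained (a minimal mean-aggregation layer)."""
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1, keepdims=True)
    return (A_hat @ X) / deg @ W

def edge_importance(A, X, W, target):
    """Perturbation-based explanation: remove each edge in turn and score
    it by how much the target node's prediction changes."""
    base = gcn_predict(A, X, W)[target]
    scores = {}
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] == 0:
                continue
            A_p = A.copy()
            A_p[i, j] = A_p[j, i] = 0   # delete edge (i, j)
            scores[(i, j)] = np.abs(gcn_predict(A_p, X, W)[target] - base).sum()
    return scores

# Toy graph: path 0-1-2; explain the prediction for node 0.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
W = np.eye(2)
scores = edge_importance(A, X, W, target=0)
```

The highest-scoring edges form the explanation subgraph: here, removing edge (0, 1) changes node 0's prediction, while removing (1, 2) does not (with only one layer, node 0 never sees node 2).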
After introducing the principles of typical methods, this review makes a comprehensive comparison of these methods, as shown in the following table:
In addition, this review summarizes current work from four aspects: self-explanation versus post-hoc explanation (interpretability vs. explainability), the background knowledge required to produce explanations (white/grey/black-box knowledge), the reasoning rationale used to obtain explanations, and other limitations. Finally, it proposes two directions for future research on the explainability of graph neural networks: establishing strictly model-agnostic methods and building evaluation benchmarks for real applications.
6 Privacy
Privacy is also a trustworthiness dimension that cannot be ignored when building a trustworthy graph neural network. In the process of building and maintaining such a system, sensitive and private information, such as the model itself or the graph data, is at risk of being leaked. This review therefore first summarizes current research on private-data leakage and then introduces various privacy-protection methods.
On the issue of privacy leakage, this review starts from the threat models of current privacy attacks. It first introduces the goals and capabilities of potential adversaries, and then introduces three common privacy attacks, the model extraction attack, the membership inference attack, and the model inversion attack, as well as potential privacy-leakage risks in other scenarios.
Next, this review introduces four types of privacy-protection technology in the graph neural network setting: federated learning, differential privacy, insusceptible training, and secure computation. After classifying and introducing these technologies, the authors also discuss their specific application scenarios and the trade-offs that privacy protection introduces between privacy, model accuracy, and implementation efficiency.
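As a toy illustration of the differential-privacy family mentioned above, the Laplace mechanism below adds noise calibrated to a statistic's sensitivity and the privacy budget epsilon. The degree-count example is an assumption for illustration; applying differential privacy inside GNN training is considerably more involved:

```python
import numpy as np

def dp_release(value, sensitivity, epsilon, rng):
    """Laplace mechanism: add noise with scale sensitivity/epsilon so the
    released statistic satisfies epsilon-differential privacy."""
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale)

# Example: privately release a node-degree count. Sensitivity is 1 because
# adding or removing one edge changes a node's degree by at most 1.
rng = np.random.default_rng(0)
true_degree = 5
noisy_degree = dp_release(true_degree, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Smaller epsilon means stronger privacy but noisier (less accurate) releases, which is exactly the privacy/accuracy trade-off the review discusses.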
At the end of the chapter, in view of the current state of research, the authors argue that work on privacy risks (e.g., leakage from gradients) and on defenses against privacy attacks is still in its early stages and needs further attention and exploration in the future.
7 Fairness
By protecting the key interests of vulnerable groups and individuals, a fair system can win people's trust. A fair graph neural network system is one whose predictions are free of bias against particular groups or individuals. Currently, graph neural networks mainly complete graph computing tasks in a data-driven manner, but the message-passing mechanism in graph neural networks may further amplify biases already present in the data. Moreover, owing to factors such as personal preferences or behavioral biases, people's interactions with GNN-based services can further deepen the bias in graph data.
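One common way to quantify such bias under group fairness is the statistical parity difference: the gap in positive-prediction rates between the groups defined by a sensitive attribute (the toy predictions below are purely illustrative):

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """Group-fairness metric: absolute difference in positive-prediction
    rates between the two groups of a binary sensitive attribute."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()   # positive rate in group 0
    rate_b = y_pred[sensitive == 1].mean()   # positive rate in group 1
    return abs(rate_a - rate_b)

# Example: binary predictions for 8 users, 4 in each demographic group.
y_pred    = [1, 1, 1, 0,  1, 0, 0, 0]
sensitive = [0, 0, 0, 0,  1, 1, 1, 1]
gap = statistical_parity_difference(y_pred, sensitive)  # 0.75 vs 0.25
```

A gap of 0 means both groups receive positive predictions at the same rate; larger gaps indicate disparate treatment that fairness methods aim to reduce.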
After introducing basic concepts such as the definitions of fairness (group fairness, individual fairness, counterfactual fairness) and the stages at which fairness methods are applied (pre-processing, in-processing, post-processing), this review divides current fairness research on graph neural networks into fair representation learning methods and fair prediction enhancement methods. After introducing the basic principles of these methods, it compares them comprehensively, as shown in the table below.
Finally, this review proposes that exploring the definition and evaluation of fairness, its influence on task performance, and the sources of unfairness (revealing unfairness) are directions that future fairness research on graph neural networks should focus on.
8 Accountability
With ever broader application scenarios and increasingly complex system architectures, individuals, enterprises, and government institutions have raised higher requirements for effective accountability in trustworthy graph neural networks. In recent years, enterprises and government agencies in China, the United States, and Europe have put forward their own plans and guidance on how to build accountability frameworks for artificial intelligence. Based on the above, this review summarizes three requirements for a graph neural network accountability framework:
(1) Reasonable assessment and certification processes should be designed to accompany the entire development and operation cycle of the graph neural network system;
(2) The auditability of the development and operation process should be ensured;
(3) Sufficient coordination and feedback mechanisms should be established so that humans can intervene in the system (adjusting, remediating, and other measures) and inappropriate behavior can be penalized.
Then, this review introduces two major categories of work currently available for building accountability frameworks in trustworthy graph neural network systems: benchmarking and security evaluation.
This review introduces benchmarking research according to the different development stages of graph neural networks: architecture design, model training, and model validation. For security evaluation, the authors mainly introduce research on verifying system integrity (integrity verification), covering data integrity and procedure integrity according to the object being verified.
Finally, this review proposes three research directions for the accountability of trustworthy graph neural networks: first, providing detection for violations of different natures; second, covering all components with end-to-end procedure-integrity and data-integrity testing; third, continuing to improve the auditability of systems and establishing more coordination and feedback mechanisms.
9 Environmental well-being
Trustworthy graph neural networks should conform to the social values of the environment in which they are deployed. Currently, global warming is a major environmental problem that human society urgently needs to solve, and achieving the ambitious goal of carbon neutrality requires joint efforts from all walks of life. To reduce the environmental impact of graph neural network systems, after introducing related metrics such as the number of nodes processed per joule, this review summarizes various methods for improving the efficiency of graph neural networks.
(1) Scalable graph neural networks and efficient data communication: with the explosive growth of graph data, large-scale datasets challenge the efficient operation of graph neural networks. To meet this challenge, current techniques mainly include sampling methods, scalable architectures, industrial applications, and efficient data communication.
(2) Model compression techniques: as the field develops, researchers have proposed deeper and more complex graph neural network models to improve performance, but the scale of these models limits their deployment on edge devices with limited computing resources. Model compression is therefore an effective way to address this challenge; related techniques include knowledge distillation, model pruning, parameter-size reduction, and model quantisation.
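Among the compression techniques listed, knowledge distillation can be sketched as training a small student to match a large teacher's temperature-softened output distribution. This is a generic sketch, not the review's specific formulation; the temperature T and the logits are illustrative assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # stable exponentiation
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's softened output distribution
    (soft targets) and the student's softened predictions."""
    p_t = softmax(teacher_logits, T)   # soft targets from the large teacher GNN
    p_s = softmax(student_logits, T)   # predictions of the small student GNN
    return -(p_t * np.log(p_s + 1e-12)).sum(axis=-1).mean()

# The loss is minimized when the student reproduces the teacher's outputs.
teacher = [[2.0, 0.0]]
loss_matched  = distillation_loss(teacher, teacher)          # student == teacher
loss_mismatch = distillation_loss([[0.0, 2.0]], teacher)     # student disagrees
```

Minimizing this loss (usually combined with the ordinary task loss) lets a compact student GNN inherit much of the teacher's behavior at a fraction of the inference cost.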
(3) Frameworks and accelerators: the irregularity of graph data, the alternation of sparse and dense computation in models, and the diversity of models and applications mean that graph neural network systems need specially designed frameworks and accelerators to improve their efficiency. Current approaches mainly include software frameworks (SW frameworks), hardware accelerators (HW accelerators), efficiency bottleneck analysis, and software-hardware co-design (SW-HW co-design).
Finally, this review proposes that exploring efficient GNNs and studying accelerators for GNNs are two future research directions that will promote the environmental well-being of graph neural networks.
10 Relationships between different trustworthiness dimensions
Current research on improving the trustworthiness of graph neural networks mainly focuses on one of the six dimensions above. This review argues that the relationships between these six dimensions cannot be ignored when constructing trustworthy graph neural networks, and summarizes them from the following two perspectives:
1) How methods from one aspect of trustworthy GNNs can be adapted to address objectives in other aspects.
2) Why advancing one aspect of trustworthy GNNs can promote or inhibit other aspects.
11 Future Research Directions
Aiming at potential research hotspots, this review treats the trustworthy graph neural network as a whole and analyzes the limitations of current methods. In order to fill the current research gaps and promote the industrialization of trustworthy graph neural networks, this review proposes the following five research directions:
A. Embracing trustworthy design concepts (shift to trustworthy GNNs)
Building trustworthy graph neural networks requires researchers and practitioners to fully embrace the concept of trustworthiness: when designing a graph neural network, not only must task performance be considered, but trustworthiness must also be brought into the design philosophy. Some existing work already accounts for both explainability and fairness in design, greatly improving the trustworthiness of graph neural networks. In addition, addressing the open issues faced in the shift to trustworthy graph neural networks, such as balancing and trading off different trustworthiness dimensions in specific applications (for example, robustness versus environmental well-being in autonomous driving), is also a challenging research direction.
B. Exploring other aspects of trustworthy GNNs
Trustworthy graph neural networks actually encompass more than the six dimensions introduced in this review. For example, generalization is also considered an important dimension of trustworthy systems, and some current research explores the relationship between the extrapolation ability of graph neural networks and the activation functions they use. Such work enriches the meaning of trustworthiness and promotes the construction of trustworthy graph neural networks. In addition, the review proposes that properly handling design principles related to trustworthy systems (such as the "New Generation Artificial Intelligence Governance Principles - Developing Responsible Artificial Intelligence" issued by China's National New Generation Artificial Intelligence Governance Committee) is also important research content for the future development of trustworthy graph neural networks.
C. Studying diversified relations
This review only touches on part of the complex relationships between the different dimensions of trustworthy graph neural networks. Exploring other interrelationships, such as that between explainability and fairness, is critical to fully understanding and building trustworthy graph neural network systems. Furthermore, these relationships are not only complex but exist on multiple levels; for example, counterfactual fairness and robustness are conceptually similar. Exploring the interrelationships between different dimensions at levels such as concepts, methods, and effects is therefore also a promising research direction.
D. Designing model-agnostic methods
Currently, many methods for improving the trustworthiness of graph neural networks require specially designed architectures. These methods do not work if the target network's internals cannot be accessed or modified (for example, when using a cloud service), which greatly reduces their usefulness in real-world scenarios. In contrast, model-agnostic methods can be applied flexibly to graph neural network systems in a plug-and-play manner, and can also be combined in the form of functional modules. Designing model-agnostic approaches will therefore greatly improve practicality and facilitate the construction of trustworthy graph neural networks.
E. Establishing a technology ecosystem for trustworthy GNNs
As a booming field, the development of trustworthy graph neural networks cannot be separated from the support of a technology ecosystem, which includes but is not limited to toolkits, datasets, metrics, and pipelines. Owing to the inherent characteristics of graph data, some current toolkits, such as IBM's AI360, may not be directly usable for evaluating graph neural networks. For example, the existence of edges between nodes breaks the independent and identically distributed (IID) assumption, so the interdependence between nodes must be considered when studying the fairness of graph neural networks. In addition, given the diversity of application scenarios, building trustworthy graph neural networks also requires supporting facilities such as datasets, metrics, evaluation standards, and software platforms suited to different tasks and scenarios. Establishing the corresponding technology ecosystem is therefore a key step in the research and industrialization of trustworthy graph neural networks.
