


In 2019, New York University and Amazon Cloud Technology jointly launched DGL (Deep Graph Library), a framework for graph neural networks. Now DGL 1.0 has officially been released. DGL 1.0 distills three years of needs from academia and industry around deep learning on graphs and graph neural network (GNN) technology, from academic research on state-of-the-art models to scaling GNNs for industrial applications, providing a comprehensive and easy-to-use solution that helps all users get more out of graph machine learning.
## Solutions DGL 1.0 provides for different scenarios
DGL 1.0 adopts a layered and modular design to meet various user needs. Key features of this release include:
- More than 100 out-of-the-box GNN model examples, including more than 15 top-ranked baseline models on the Open Graph Benchmark (OGB);
- More than 150 commonly used GNN modules, including GNN layers, datasets, graph data transform modules, graph samplers, etc., which can be used to build new model architectures or GNN-based solutions;
- Flexible and efficient message passing and sparse matrix abstractions for developing new GNN modules;
- Multi-GPU and distributed training capabilities that support training on graphs at the tens-of-billions scale.
*Figure: the DGL 1.0 technology stack.*
One of the highlights of this version is the introduction of DGL-Sparse, a new programming interface that uses sparse matrices as the core programming abstraction. DGL-Sparse not only simplifies the development of existing GNN models such as graph convolutional networks, but also works with the latest models, including diffusion-based GNNs, hypergraph neural networks, and graph Transformers.
The release of DGL 1.0 drew an enthusiastic response online. Scholars such as Yann LeCun, one of the three giants of deep learning, and Xavier Bresson, associate professor at the National University of Singapore, liked and shared the announcement.
In the rest of this article, the authors outline two mainstream GNN paradigms: the message passing view and the matrix view. These paradigms help researchers better understand the inner workings of GNNs, and the matrix view is also one of the motivations behind the development of DGL Sparse.
## Message passing view and matrix view
There is a line in the movie "Arrival": "The language you speak determines the way you think and affects your view of things." This also applies to GNNs.
Graph neural networks can be expressed in two different paradigms. The first, called the message passing view, describes a GNN model from a fine-grained, local perspective, detailing how messages are exchanged along edges and how node states are updated accordingly. The second is the matrix view: since a graph is algebraically equivalent to its sparse adjacency matrix, many researchers prefer to express GNN models from a coarse-grained, global perspective, emphasizing operations on the sparse adjacency matrix and the node feature vectors.
The message passing view reveals the connection between GNNs and the Weisfeiler-Lehman (WL) graph isomorphism test, which also relies on aggregating information from neighbors. The matrix view approaches GNNs algebraically and has led to interesting discoveries such as the over-smoothing problem.
In short, both views are indispensable tools for studying GNNs: they complement each other and help researchers better understand and describe the nature and characteristics of GNN models. It is for this reason that one of the main motivations behind DGL 1.0 was to add support for the matrix view on top of the existing message passing interface.
## DGL Sparse: a sparse matrix library designed for graph machine learning
DGL 1.0 adds a new library, DGL Sparse (dgl.sparse), which together with DGL's message passing interface improves support for all types of graph neural network models. DGL Sparse provides sparse matrix classes and operations tailored to graph machine learning, making it easier to write GNNs from the matrix view. In the following sections, the authors demonstrate several GNN examples, showing their mathematical formulations and the corresponding implementations in DGL Sparse.
## Graph convolutional networks
GCN is one of the pioneering GNN models. It can be expressed in both the message passing view and the matrix view. The following code compares the two implementations in DGL.
### Implementing GCN with the message passing API
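The original code listing did not survive extraction. As a sketch of what the message passing view computes, here is a plain NumPy version of one GCN layer expressed edge by edge (the function name and signature are illustrative; DGL's actual API uses `g.update_all` with built-in message and reduce functions):

```python
import numpy as np

def gcn_message_passing(src, dst, X, W, num_nodes):
    """One GCN layer in the message-passing view: every edge carries a
    degree-normalized message from its source node, each node sums its
    incoming messages, and a linear transform W is applied at the end.

    src, dst : 1-D integer arrays of edge endpoints (message flows src -> dst)
    X        : node feature matrix, shape (num_nodes, in_dim)
    W        : weight matrix, shape (in_dim, out_dim)
    """
    # Add self-loops and compute symmetric degree normalization,
    # as in Kipf & Welling's GCN
    src = np.concatenate([src, np.arange(num_nodes)])
    dst = np.concatenate([dst, np.arange(num_nodes)])
    deg = np.bincount(dst, minlength=num_nodes).astype(float)
    norm = 1.0 / np.sqrt(deg)

    # Message phase: each edge carries the normalized source feature
    messages = X[src] * norm[src, None]

    # Reduce phase: sum incoming messages at each destination node
    H = np.zeros_like(X, dtype=float)
    np.add.at(H, dst, messages)
    H = H * norm[:, None]
    return H @ W
```

On a tiny graph this reproduces the familiar closed form D̂^(-1/2) (A + I) D̂^(-1/2) X W, which is exactly the bridge to the matrix view discussed next.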
### Implementing GCN with DGL Sparse
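The DGL Sparse listing was likewise lost in extraction. The matrix-view formulation it implements is H = D̂^(-1/2) (A + I) D̂^(-1/2) X W; below is a self-contained NumPy sketch of that formula (in DGL Sparse the same layer is written with sparse matrix objects and the `@` operator instead of dense arrays):

```python
import numpy as np

def gcn_matrix_view(A, X, W):
    """One GCN layer in the matrix view:
    H = D^{-1/2} (A + I) D^{-1/2} X W,
    where D is the degree matrix of A + I (self-loops included)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_invsqrt = np.diag(1.0 / np.sqrt(d))
    return D_invsqrt @ A_hat @ D_invsqrt @ X @ W
```

Note how the whole layer collapses into one line of matrix algebra; that conciseness is precisely what the matrix view (and DGL Sparse) is after.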
## Graph diffusion-based GNNs
Graph diffusion is the process of propagating or smoothing node features or signals along edges. Many classic graph algorithms, such as PageRank, fall into this category. A series of studies has shown that combining graph diffusion with neural networks is an effective and efficient way to improve model predictions. One representative model is APPNP, whose core computation is the recursion Z^(k+1) = (1 − α) Â Z^(k) + α Z^(0), where Â is the symmetrically normalized adjacency matrix and Z^(0) the initial node representations. It can be implemented directly in DGL Sparse.
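The recursion above can be sketched in a few lines; this is a NumPy illustration of the APPNP propagation step (Klicpera et al., 2019), not the DGL Sparse listing from the original article:

```python
import numpy as np

def appnp_propagate(A_hat, H, alpha=0.1, k=10):
    """APPNP propagation:
    Z^{(0)} = H,   Z^{(t+1)} = (1 - alpha) * A_hat @ Z^{(t)} + alpha * H,
    where A_hat is the symmetrically normalized adjacency matrix (with
    self-loops) and H the node representations produced by an MLP.
    The teleport term alpha * H keeps each node anchored to its own
    prediction, which counteracts over-smoothing."""
    Z = H
    for _ in range(k):
        Z = (1 - alpha) * (A_hat @ Z) + alpha * H
    return Z
```

With alpha = 1 the propagation is a no-op (pure teleport back to H); with small alpha it approximates personalized PageRank diffusion of the predictions.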
## Hypergraph neural networks
A hypergraph is a generalization of a graph in which an edge can connect any number of nodes (a so-called hyperedge). Hypergraphs are particularly useful in scenarios where higher-order relationships need to be captured, such as co-purchasing behavior on e-commerce platforms or co-authorship in citation networks. A typical feature of a hypergraph is its sparse incidence matrix, so hypergraph neural networks (HGNN) are often defined using sparse matrices. The following is a hypergraph convolutional network (Feng et al., 2018) and its code implementation.
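The original code listing did not survive, so here is a NumPy sketch of the hypergraph convolution from Feng et al. (2018), with hyperedge weights fixed to 1 for simplicity (the DGL Sparse version expresses the same formula with sparse incidence and diagonal matrices):

```python
import numpy as np

def hypergraph_conv(H, X, Theta):
    """Hypergraph convolution (after Feng et al., 2018, unit edge weights):
    X' = D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2} X Theta

    H     : incidence matrix, shape (num_nodes, num_hyperedges);
            H[v, e] = 1 iff node v belongs to hyperedge e
    D_v   : node degrees (row sums of H)
    D_e   : hyperedge degrees (column sums of H)
    Theta : learnable weight matrix, shape (in_dim, out_dim)
    """
    d_v = H.sum(axis=1)
    d_e = H.sum(axis=0)
    Dv_invsqrt = np.diag(1.0 / np.sqrt(d_v))
    De_inv = np.diag(1.0 / d_e)
    # H De^{-1} H^T averages features within each hyperedge and
    # scatters the result back to its member nodes
    return Dv_invsqrt @ H @ De_inv @ H.T @ Dv_invsqrt @ X @ Theta
```

Intuitively, each hyperedge first averages the features of its member nodes, and each node then aggregates over the hyperedges it belongs to.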
## Graph Transformer
The Transformer has become the most successful model architecture in natural language processing, and researchers are now extending it to graph machine learning. Dwivedi et al. pioneered the idea of restricting all multi-head attention to pairs of nodes that are connected in the graph. With the DGL Sparse tools, this model can be implemented in only about 10 lines of code.
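The DGL Sparse listing (which uses sparse SDDMM, a sparse softmax, and bspmm) was lost in extraction. As a sketch of the underlying computation, here is a single-head NumPy version of attention restricted to graph edges; all names are illustrative:

```python
import numpy as np

def sparse_graph_attention(src, dst, Q, K, V, num_nodes):
    """Single-head graph attention restricted to edges (in the style of
    Dwivedi & Bresson): scores are computed only for connected node pairs
    (an SDDMM pattern), softmax is taken over each node's incoming edges,
    and values are aggregated along edges (an SpMM pattern).
    Assumes every node has at least one incoming edge, e.g. a self-loop."""
    d = Q.shape[1]
    # SDDMM: one attention score per edge (dst attends over its neighbors)
    scores = (Q[dst] * K[src]).sum(axis=1) / np.sqrt(d)
    # Sparse softmax over each destination node's incoming edges
    scores = np.exp(scores - scores.max())
    denom = np.zeros(num_nodes)
    np.add.at(denom, dst, scores)
    attn = scores / denom[dst]
    # SpMM: attention-weighted sum of neighbor values
    out = np.zeros((num_nodes, V.shape[1]))
    np.add.at(out, dst, attn[:, None] * V[src])
    return out
```

The multi-head variant simply carries an extra head dimension in the per-edge scores, which is exactly the vector-shaped non-zero element and bspmm support described below.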
## Key features of DGL Sparse
Compared with general sparse matrix libraries such as scipy.sparse or torch.sparse, DGL Sparse is designed throughout to serve graph machine learning, which is reflected in the following key features:
- Automatic sparse format selection: users do not need to worry about choosing the right data structure for storing sparse matrices (the so-called sparse formats). They only need to remember that dgl.sparse.spmatrix creates a sparse matrix; DGL internally selects the optimal format based on the operator being invoked;
- Scalar or vector non-zero elements: many GNN models learn multiple weights per edge (such as the multi-head attention vectors demonstrated in the Graph Transformer example). To accommodate this, DGL Sparse allows non-zero elements to have vector shapes and extends common sparse operations, such as sparse-dense matrix multiplication (SpMM), accordingly; see the bspmm operation in the Graph Transformer example.
By leveraging these design features, DGL Sparse reduces code length by 2.7x on average compared with implementing matrix-view models through the message passing interface. The simplified code also cuts framework overhead by 43%. In addition, DGL Sparse is compatible with PyTorch and integrates easily with the various tools and packages in the PyTorch ecosystem.

## Getting started with DGL 1.0
DGL 1.0 has been released for all platforms and can be easily installed with pip or conda. In addition to the examples introduced above, the first version of DGL Sparse includes 5 tutorials and 11 end-to-end examples, all of which can be tried directly in Google Colab without local installation. To learn more about the new features of DGL 1.0, refer to the authors' release notes. If you encounter problems or have suggestions or feedback while using DGL, you can reach the DGL team through the Discuss forum or Slack.
The above is the detailed content of 10 lines of code to complete the graph Transformer, the graph neural network framework DGL ushered in version 1.0. For more information, please follow other related articles on the PHP Chinese website!
