HKU's open-source graph foundation model OpenGraph: strong generalization, predicting entirely new data with a single forward pass

The data-scarcity problem in graph learning has a new remedy!

OpenGraph is a graph foundation model designed specifically for zero-shot prediction across a variety of graph datasets.

The team of Chao Huang, head of the Data Intelligence Laboratory at the University of Hong Kong, also proposed fine-tuning techniques to improve the model's adaptability to new tasks.

Currently, this work has been posted on GitHub.

By introducing data augmentation techniques, this work explores in depth how to enhance the generalization ability of graph models, especially when training and test data differ significantly.

OpenGraph is a general-purpose graph structure model that makes zero-shot predictions on entirely new data through a single forward pass.


To achieve this goal, the team solved the following three challenges:

  • Token differences across datasets: different graph datasets often have different graph token sets, and the model must be able to predict across datasets.
  • Node relationship modeling: when building a general graph model, effectively modeling node relationships is crucial, as it bears on the model's scalability and efficiency.
  • Data scarcity: facing difficulties in data acquisition, the team performs data augmentation with large language models to simulate complex graph structures and improve the quality of model training.

Through a series of innovative designs, such as a topology-aware graph tokenizer and an anchor-based graph Transformer, OpenGraph effectively addresses the above challenges. Test results on multiple datasets demonstrate the model's excellent generalization ability.

OpenGraph model

The OpenGraph model architecture mainly consists of 3 core parts:

  • Unified graph Tokenizer.
  • Extensible graph Transformer.
  • Knowledge distillation technology based on large language model.

First let’s talk about the unified graph Tokenizer.

In order to adapt to the differences in nodes and edges in different data sets, the team developed a unified graph Tokenizer, which normalizes graph data into a token sequence.

This process includes high-order adjacency matrix smoothing and topology-aware mapping.

High-order adjacency matrix smoothing uses higher powers of the adjacency matrix to address sparse connectivity, while topology-aware mapping converts the adjacency matrix into a node sequence, using fast singular value decomposition (SVD) to minimize information loss and retain more of the graph's structural information.
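The two tokenizer steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name, the row-normalization, and the use of dense `numpy.linalg.svd` (the paper describes a fast SVD) are all assumptions made for clarity.

```python
import numpy as np

def graph_tokens(adj, order=2, dim=2):
    """Smooth the adjacency matrix with its higher-order powers,
    then project it into a token sequence via truncated SVD."""
    a = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1)  # row-normalize
    smoothed = np.zeros_like(a)
    power = np.eye(a.shape[0])
    for _ in range(order):
        power = power @ a            # accumulate A, A^2, ..., A^order
        smoothed += power            # higher powers fill in sparse links
    # Truncated SVD keeps the top-`dim` singular directions as node tokens.
    u, s, _ = np.linalg.svd(smoothed)
    return u[:, :dim] * s[:dim]      # one `dim`-sized token per node

# Usage: a tiny 4-node ring graph
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
tokens = graph_tokens(adj, order=2, dim=2)
print(tokens.shape)  # (4, 2): one token vector per node
```

The key point is that the smoothed matrix is dense where the raw adjacency is sparse, so the SVD projection sees multi-hop structure rather than isolated edges.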

The second is the extensible graph Transformer.

After tokenization, OpenGraph uses a Transformer architecture to model the dependencies between nodes, optimizing performance and efficiency mainly with the following techniques:

First, token sequence sampling: sampling reduces the number of relations the model must process, thereby lowering the time and space complexity of training.
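Token sequence sampling is conceptually simple: cap the number of node tokens seen per step so that self-attention cost scales with the budget rather than the full graph. A minimal sketch, with the function name and fixed seed as assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_token_sequence(tokens, budget):
    """Keep at most `budget` node tokens per training step, so
    self-attention cost drops from O(n^2) to O(budget^2)."""
    n = tokens.shape[0]
    if n <= budget:
        return tokens, np.arange(n)
    idx = rng.choice(n, size=budget, replace=False)  # sample without repeats
    return tokens[idx], idx

tokens = rng.normal(size=(10_000, 64))   # 10k node tokens, 64-dim each
sub, idx = sample_token_sequence(tokens, budget=512)
print(sub.shape)  # (512, 64)
```

Only the sampled subset enters the attention layers, which is where the quadratic cost saving comes from.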

Second, an anchor-sampling self-attention mechanism. By passing information between nodes in stages via a small set of anchors, this method further reduces computational complexity and effectively improves the model's training efficiency and stability.
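The staged message passing can be illustrated as two-hop attention through anchors: all nodes write to a few anchor tokens, then read back from them. This is a parameter-free sketch of the complexity argument only; real attention layers have learned projections, and the anchor count and seeds here are assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def anchor_attention(x, num_anchors=32):
    """Two-stage attention: nodes -> anchors, then anchors -> nodes.
    Cost is O(n * num_anchors) instead of O(n^2)."""
    n, d = x.shape
    pick = np.random.default_rng(0).choice(n, num_anchors, replace=False)
    anchors = x[pick]
    # Stage 1: anchors gather information from all nodes.
    w1 = softmax(anchors @ x.T / np.sqrt(d))      # (k, n) attention weights
    anchors = w1 @ x
    # Stage 2: nodes read back from the updated anchors.
    w2 = softmax(x @ anchors.T / np.sqrt(d))      # (n, k) attention weights
    return w2 @ anchors

x = np.random.default_rng(1).normal(size=(1000, 64))
out = anchor_attention(x)
print(out.shape)  # (1000, 64)
```

Because each stage attends over at most `num_anchors` tokens, neither stage ever materializes an n-by-n attention matrix.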

The last component is knowledge distillation from large language models.

In order to deal with the data privacy and category diversity issues faced when training general graph models, the team drew inspiration from the knowledge and understanding capabilities of large language models (LLM) and used LLM to generate various graph structure data.

This data enhancement mechanism effectively improves the quality and practicality of data by simulating the characteristics of real-world graphs.

To generate edges, the team first produced a set of nodes adapted to the specific application, each with a textual description.

When faced with large-scale node sets such as e-commerce platforms, researchers deal with this by subdividing nodes into more specific subcategories.

For example, from "electronic products" to specific "mobile phones", "laptops", etc., this process is repeated until the nodes are refined enough to be close to real instances.

The prompt tree algorithm subdivides nodes along a tree structure, generating increasingly specific entities.

It starts from a general category such as "product", gradually refines it into concrete subcategories, and finally forms a node tree.
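The recursive refinement can be sketched as follows. Here `subdivide` is a hypothetical stand-in for the LLM call that proposes subcategories; the fixed lookup table, category names, and depth limit are all illustrative assumptions, not the paper's prompts.

```python
def subdivide(category):
    """Stand-in for an LLM call that proposes subcategories.
    A hypothetical fixed table replaces the real model here."""
    table = {
        "product": ["electronics", "clothing"],
        "electronics": ["mobile phone", "laptop"],
    }
    return table.get(category, [])

def prompt_tree(root, max_depth=3):
    """Recursively refine a root category into leaf entities,
    mirroring the prompt-tree subdivision described above."""
    children = subdivide(root) if max_depth > 0 else []
    if not children:
        return [root]                 # concrete enough to be a graph node
    leaves = []
    for child in children:
        leaves.extend(prompt_tree(child, max_depth - 1))
    return leaves

print(prompt_tree("product"))
# ['mobile phone', 'laptop', 'clothing']
```

The leaves of the tree become the node set; recursion stops once a category has no further subdivisions or the depth budget runs out.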

For edge generation, the researchers use Gibbs sampling to form edges over the generated node set.

To reduce the computational burden, rather than having the LLM directly evaluate every possible edge, the team first uses the LLM to compute text similarity between nodes and then applies a simple algorithm to determine node relationships.
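A minimal sketch of the similarity-then-sample idea, assuming node text has already been embedded (the cosine similarity, sigmoid mapping, and single-pass Bernoulli draw stand in for the paper's actual similarity computation and Gibbs procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_edges(emb, temperature=1.0):
    """Turn pairwise text-similarity scores into sampled edges,
    instead of asking an LLM about every candidate pair."""
    norms = np.linalg.norm(emb, axis=1, keepdims=True)
    sim = (emb @ emb.T) / (norms @ norms.T)        # cosine similarity
    prob = 1 / (1 + np.exp(-sim / temperature))    # similarity -> probability
    np.fill_diagonal(prob, 0)                      # no self-loops
    draw = rng.random(prob.shape)
    adj = np.triu(draw < prob, k=1)                # sample upper triangle once
    return adj | adj.T                             # symmetrize

emb = rng.normal(size=(5, 8))                      # hypothetical text embeddings
adj = sample_edges(emb)
print(adj.shape)  # (5, 5) boolean adjacency matrix
```

The expensive LLM call happens once per node (to produce the embedding), after which edge decisions are cheap vectorized arithmetic.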

On this basis, the team introduced several technical adjustments:

  • Dynamic probability normalization: dynamically maps similarities into a probability range better suited for sampling.
  • Node locality: introduces a notion of locality, establishing connections only within local subsets of nodes to mimic the locality of real-world networks.
  • Graph topology pattern injection: uses a graph convolutional network to refine node representations so they better fit graph-structural characteristics and reduce distribution shift.

The above steps ensure that the generated graph data is not only rich and diverse, but also close to the connection patterns and structural characteristics of the real world.
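Of the three adjustments above, the topology injection step can be sketched as normalized graph-convolution smoothing. This parameter-free version is an assumption for illustration; the actual method presumably uses a trained graph convolutional network rather than plain propagation.

```python
import numpy as np

def inject_topology(adj, x, layers=2):
    """Smooth node representations with normalized graph convolution
    so they better reflect the generated graph's structure."""
    a = adj + np.eye(adj.shape[0])                 # add self-loops
    d = 1.0 / np.sqrt(a.sum(axis=1))
    a_hat = a * d[:, None] * d[None, :]            # D^-1/2 (A+I) D^-1/2
    for _ in range(layers):
        x = a_hat @ x                              # propagate along edges
    return x

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # toy 3-node graph
x = rng.normal(size=(3, 4))
out = inject_topology(adj, x)
print(out.shape)  # (3, 4): same shape, structure-aware features
```

Each propagation layer mixes a node's features with its neighbors', pulling the generated features toward the generated topology.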

Experimental verification and performance analysis

It should be noted that the experiments train OpenGraph only on datasets generated by the LLM, and test it on diverse real-world datasets covering node classification and link prediction tasks.

The experimental design is as follows:

Zero-shot setting.

To evaluate OpenGraph's performance on unseen data, the model is trained on a generated training set and then evaluated on completely different real-world test sets, ensuring that training and test data share no overlap in nodes, edges, or features.

Few-shot setting.

Since many methods struggle to perform zero-shot prediction at all, a few-shot setting is also introduced: baseline models are pre-trained on the pre-training data and then fine-tuned with k-shot samples.

Results on 2 tasks and 8 test sets show that OpenGraph significantly outperforms existing methods in zero-shot prediction.

Additionally, existing pre-trained models sometimes perform worse than models trained from scratch on cross-dataset tasks.

Study on the Impact of Graph Tokenizer Design

At the same time, the team explored how the design of graph Tokenizer affects model performance.

First, experiments showed that skipping adjacency-matrix smoothing (a smoothing order of 0) significantly degrades performance, indicating that smoothing is necessary.

The researchers then tried several simple topology-aware alternatives: one-hot encoded IDs across datasets, random mapping, and node degree-based representations.

Experimental results show that the performance of these alternatives is not ideal.

Specifically, cross-dataset one-hot ID representations perform worst, degree-based representations also perform poorly, and random mapping, while slightly better, still lags well behind the optimized topology-aware mapping.


Impact of data generation techniques

The team investigated the impact of different pre-training datasets on OpenGraph's performance, including datasets generated by the LLM-based knowledge-distillation method as well as several real datasets.

The compared pre-training datasets include versions of the generated dataset with individual generation techniques removed, two real datasets unrelated to the test sets (Yelp2018 and Gowalla), and one real dataset similar to a test set (ML-10M).

The experimental results show that the generated dataset performs well across all test sets, and that removing any of the three generation techniques significantly hurts performance, verifying their effectiveness.

Training on real datasets unrelated to the test sets (such as Yelp and Gowalla) sometimes degrades performance, possibly due to distribution differences between datasets.

The ML-10M dataset achieves the best performance on similar test sets (such as ML-1M and ML-10M), highlighting the importance of similarity between training and test datasets.


Research on Transformer sampling technology

In this part of the experiment, the research team explored two sampling techniques used in the graph Transformer module:

Token sequence sampling (Seq) and anchor sampling (Anc).

They conducted detailed ablation experiments on these two sampling methods to evaluate their specific impact on model performance.


Experimental results show that whether it is token sequence sampling or anchor point sampling, both can effectively reduce the space and time complexity of the model during the training and testing phases. This is especially important for processing large-scale graph data and can significantly improve efficiency.

From a performance perspective, token sequence sampling has a positive impact on the overall performance of the model. This sampling strategy optimizes the representation of the graph by selecting key tokens, thereby improving the model's ability to handle complex graph structures.

In contrast, experiments on the ddi dataset show that anchor sampling may have a negative impact on model performance. Anchor sampling simplifies the graph structure by selecting specific nodes as anchor points, but this method may ignore some key graph structure information, thus affecting the accuracy of the model.

In summary, although both sampling techniques have their advantages, in practical applications, the appropriate sampling strategy needs to be carefully selected based on specific data sets and task requirements.

Research Conclusion

This research aims to develop a highly adaptable framework that can accurately identify and parse complex topological patterns of various graph structures.

The researchers' goal is to significantly enhance the model's generalization ability in zero-shot graph learning tasks, including a variety of downstream applications, by fully leveraging the capabilities of the proposed model.

The model is built with the support of a scalable graph Transformer architecture and LLM-enhanced data augmentation mechanism to improve the efficiency and robustness of OpenGraph.

Through extensive testing on multiple standard datasets, the team demonstrated the model’s excellent generalization performance.


As an initial attempt at building a graph foundation model, the team's future work will focus on increasing the framework's automation capabilities, including automatically identifying noisy connections and conducting counterfactual learning.

At the same time, the team plans to learn and extract common and transferable patterns of various graph structures to further promote the application scope and effect of the model.

Reference link:

[1] Paper: https://arxiv.org/pdf/2403.01121.pdf.

[2] Source code library: https://github.com/HKUDS/OpenGraph.
