Multi-Domain Generalized Graph Meta Learning
Authors: Mingkai Lin, Wenzhong Li, Ding Li, Yizhou Chen, Guohao Li, Sanglu Lu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments based on four real-world graph domain datasets show that the proposed method significantly outperforms the state-of-the-art in multi-domain graph meta learning tasks. |
| Researcher Affiliation | Academia | State Key Laboratory for Novel Software Technology, Nanjing University Nanjing, China mingkai@smail.nju.edu.cn, lwz@nju.edu.cn |
| Pseudocode | Yes | Algorithm 1: Meta Training for MD-Gram |
| Open Source Code | No | The paper does not provide a direct link or explicit statement about the availability of its source code. |
| Open Datasets | Yes | The experiments are based on four real-world networks from different graph domains: (1) Product [P] (Hu et al. 2020): The Ogbn-products from Open Graph Benchmark... (2) Yelp [Y] (Zeng et al. 2019): A social network... (3) Reddit [R] (Hamilton, Ying, and Leskovec 2017): A graph dataset... (4) Academic [A] (Hu et al. 2020): An academic citation network named ogbn-papers100M from Open Graph Benchmark. |
| Dataset Splits | Yes | We consider the few-shot setting for a link prediction task in which at most 30% of edges are known beforehand, a fixed 10% is used for validation, and the remaining edges are predicted, following the setting of (Bose et al. 2019; Huang and Zitnik 2020). |
| Hardware Specification | Yes | The experiments are implemented with PyTorch in Python 3.6.8 and conducted on a PC with an Intel Xeon E5-2620 v2 2.10GHz CPU, a GeForce RTX 2070 8GB GPU, and 64GB memory, running 64-bit CentOS Linux 7.2. |
| Software Dependencies | No | The paper mentions 'Pytorch' and 'Python 3.6.8' but does not specify the version for Pytorch, nor does it list other key libraries with their specific version numbers. |
| Experiment Setup | Yes | The unified node feature dimension is d = 256; learning rates are α1 = 0.001, α2 = α3 = 0.005; iteration numbers are r = 20, l = 10; the hyperparameter for the weighted loss is λ = 1. |
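Since the authors did not release source code, the reported hyperparameters can only be captured as a sketch. The configuration below mirrors the values quoted in the Experiment Setup row; all variable names are hypothetical and not taken from the paper.

```python
# Hypothetical configuration for reproducing the MD-Gram experiment setup.
# Key names are illustrative assumptions; only the numeric values come from
# the paper's reported hyperparameters.
MD_GRAM_CONFIG = {
    "feature_dim": 256,          # unified node feature dimension d
    "lr_alpha1": 0.001,          # learning rate α1
    "lr_alpha2": 0.005,          # learning rate α2
    "lr_alpha3": 0.005,          # learning rate α3 (α2 = α3 in the paper)
    "outer_iterations": 20,      # iteration number r
    "inner_iterations": 10,      # iteration number l
    "loss_weight_lambda": 1.0,   # weighted-loss hyperparameter λ
}

# Sanity checks on the reported values.
assert MD_GRAM_CONFIG["lr_alpha2"] == MD_GRAM_CONFIG["lr_alpha3"]
assert MD_GRAM_CONFIG["feature_dim"] == 256
```

A reproduction attempt would still need to recover unreported details (e.g., optimizer choice and batch sizes) from the paper or its authors.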