LD2: Scalable Heterophilous Graph Neural Network with Decoupled Embeddings
Authors: Ningyi Liao, Siqiang Luo, Xiang Li, Jieming Shi
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments to showcase that our model is capable of lightweight minibatch training on large-scale heterophilous graphs, with up to 15× speed improvement and efficient memory utilization, while maintaining comparable or better performance than the baselines. |
| Researcher Affiliation | Academia | Ningyi Liao (Nanyang Technological University, liao0090@e.ntu.edu.sg); Siqiang Luo (Nanyang Technological University, siqiang.luo@ntu.edu.sg); Xiang Li (East China Normal University, xiangli@dase.ecnu.edu.cn); Jieming Shi (Hong Kong Polytechnic University, jieming.shi@polyu.edu.hk) |
| Pseudocode | Yes | Algorithm 1 A2Prop: Approximate Adjacency Propagation (a hedged sketch of this propagation step follows the table) |
| Open Source Code | Yes | Our code is available at: https://github.com/gdmnl/LD2. |
| Open Datasets | Yes | We mainly perform experiments on million-scale and above heterophilous datasets [26, 55] for the transductive node classification task, with the largest available graph wiki (m = 243M) included. |
| Dataset Splits | Yes | We leverage settings as per [26] such as the random train/test splits and the induced subgraph testing for GraphSAINT-sampling models. (An illustrative split sketch follows the table.) |
| Hardware Specification | Yes | Evaluations are conducted on a machine with 192GB RAM, two 28-core Intel Xeon CPUs (2.2GHz), and an NVIDIA RTX A5000 GPU (24GB memory). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with versions like Python 3.8 or PyTorch 1.9). |
| Experiment Setup | No | The main text defers details, stating only that parameter settings, further experiments, and subsequent discussions can be found in the Appendix. |
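
To make the pseudocode row above concrete, here is a minimal sketch of multi-hop adjacency propagation in the spirit of Algorithm 1 (A2Prop). This is not the authors' implementation: LD2's A2Prop applies approximation during propagation to bound cost, which is omitted here, and the function name `propagate` and the `hops` parameter are illustrative assumptions.

```python
# Sketch only: exact (non-approximate) decoupled propagation,
# standing in for the approximate A2Prop of Algorithm 1.
import numpy as np
import scipy.sparse as sp

def propagate(adj: sp.csr_matrix, feats: np.ndarray, hops: int = 2):
    """Precompute embeddings P_l = A_hat^l X for l = 0..hops, where
    A_hat = D^-1/2 (A + I) D^-1/2 is the symmetrically normalized
    adjacency with self-loops (a common convention; assumed here)."""
    n = adj.shape[0]
    a_hat = adj + sp.eye(n, format="csr")        # add self-loops
    deg = np.asarray(a_hat.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    a_hat = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    embs, p = [feats], feats
    for _ in range(hops):
        p = a_hat @ p                            # one propagation hop
        embs.append(p)
    return embs                                  # consumed by a downstream MLP
```

Because the propagated embeddings are precomputed once, the downstream model can train on minibatches of node rows without neighborhood sampling, which is the decoupling that enables the scalability the paper reports.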
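For the dataset-splits row, the following is an illustrative random transductive split. The 50/25/25 ratios and the `random_split` helper are placeholders for exposition, not the exact protocol of [26].

```python
# Hypothetical split helper; fractions are illustrative, not from the paper.
import numpy as np

def random_split(num_nodes: int, train_frac=0.5, val_frac=0.25, seed=0):
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    return (perm[:n_train],                  # train node indices
            perm[n_train:n_train + n_val],   # validation node indices
            perm[n_train + n_val:])          # test node indices
```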