EIGNN: Efficient Infinite-Depth Graph Neural Networks

Authors: Juncheng Liu, Kenji Kawaguchi, Bryan Hooi, Yiwei Wang, Xiaokui Xiao

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The empirical results of comprehensive experiments on synthetic and real-world datasets show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance. In this section, we demonstrate that EIGNN can effectively learn representations which have the ability to capture long-range dependencies in graphs. Therefore, EIGNN achieves state-of-the-art performance for node classification task on both synthetic and real-world datasets. Specifically, we conduct experiments to compare EIGNN with representative baselines on seven graph datasets (Chains, Chameleon, Squirrel, Cornell, Texas, Wisconsin, and PPI).
Researcher Affiliation | Academia | National University of Singapore. Emails: {juncheng,kenji,bhooi}@comp.nus.edu.sg, wangyw_seu@foxmail.com, xkxiao@nus.edu.sg
Pseudocode | No | No explicit pseudocode or algorithm blocks found.
Open Source Code | Yes | The implementation can be found at https://github.com/liu-jc/EIGNN (paper footnote 1).
Open Datasets | Yes | Specifically, we conduct experiments to compare EIGNN with representative baselines on seven graph datasets (Chains, Chameleon, Squirrel, Cornell, Texas, Wisconsin, and PPI), where Chains is a synthetic dataset used in Gu et al. [10]. Chameleon, Squirrel, Cornell, Texas, and Wisconsin are real-world datasets with a single graph each [21], while PPI is a real-world dataset with multiple graphs [11]. Detailed descriptions of datasets and settings about experiments can be found in Appendix C. (See the dataset-loading sketch below the table.)
Dataset Splits | Yes | For the training/validation/testing split, we consider 5%/10%/85%, which is similar to the semi-supervised node classification setting [14]. (See the split sketch below the table.)
Hardware Specification | No | The paper mentions training times but does not provide specific hardware details such as CPU/GPU models or memory specifications.
Software Dependencies | No | The paper mentions PyTorch [20] but does not provide specific version numbers for software dependencies or libraries.
Experiment Setup | Yes | The hyper-parameter settings and details about the baseline implementations can be found in Appendix C.2.
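
The paper does not state which data loaders the authors used, but all of the real-world datasets in the table are available through PyTorch Geometric's built-in dataset classes. The following is a minimal sketch under that assumption; the `root` directory is hypothetical, and the synthetic Chains dataset is generated rather than downloaded, so it is omitted here.

```python
# Minimal sketch: obtaining the paper's real-world datasets via
# PyTorch Geometric. Assumption: the authors' repo may ship its own
# loaders; these built-in classes are one common way to get the data.
from torch_geometric.datasets import WebKB, WikipediaNetwork, PPI

root = "data"  # hypothetical local cache directory

# Single-graph heterophilic benchmarks [21]
cornell = WebKB(root, name="Cornell")
texas = WebKB(root, name="Texas")
wisconsin = WebKB(root, name="Wisconsin")
chameleon = WikipediaNetwork(root, name="chameleon")
squirrel = WikipediaNetwork(root, name="squirrel")

# Multi-graph protein-protein interaction benchmark [11]
ppi_train = PPI(f"{root}/PPI", split="train")

print(cornell[0])  # e.g. Data(x=[num_nodes, num_features], edge_index=..., y=...)
```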
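The 5%/10%/85% split is reported only as fractions; the exact procedure (random seeds, any per-class balancing) is deferred to the paper's Appendix C. Below is a minimal sketch of one plausible way to draw such a random node split in PyTorch; the function name and seed are illustrative assumptions, not the authors' code.

```python
import torch

def random_node_split(num_nodes: int, train_frac: float = 0.05,
                      val_frac: float = 0.10, seed: int = 0):
    """Draw a random 5%/10%/85% train/val/test node split.

    Sketch only: the paper reports these fractions, but its exact
    split procedure is described in Appendix C.
    """
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]  # remaining ~85%
    return train_idx, val_idx, test_idx

# Usage example on a single-graph dataset of 2277 nodes (Chameleon's size):
train_idx, val_idx, test_idx = random_node_split(num_nodes=2277)
```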