Towards Deepening Graph Neural Networks: A GNTK-based Optimization Perspective

Authors: Wei Huang, Yayong Li, Weitao Du, Richard Yi Da Xu, Jie Yin, Ling Chen, Miao Zhang

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental evaluation consistently confirms that the proposed method achieves better results than relevant counterparts in both the infinite-width and finite-width settings.
Researcher Affiliation | Academia | Wei Huang, University of Technology Sydney, weihuang.uts@gmail.com; Yayong Li, University of Technology Sydney, yayong.li@student.uts.edu.au; Weitao Du, Northeastern University, weitao.du@northwestern.edu; Jie Yin, The University of Sydney, jie.yin@sydney.edu.au; Richard Yi Da Xu & Ling Chen, University of Technology Sydney, {YiDa.Xu,ling.chen}@uts.edu.au; Miao Zhang, Aalborg University, miaoz@cs.aau.dk
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using and adapting existing open-source implementations (GNTK and DropEdge) but provides no link or explicit release statement for its own code, such as the proposed Critical DropEdge method or the adapted implementations.
Open Datasets | Yes | Details of four real-world graph datasets used for node classification are summarized in Table 3 in Appendix F.1. ... The Cora dataset consists of 2,708 scientific publications... The Citeseer dataset consists of 3,312 scientific publications... The Pubmed Diabetes dataset consists of 19,717 scientific publications... The Physics dataset consists of 34,493 authors as nodes...
Dataset Splits | Yes | Table 3 (Details of Datasets) lists Train/Val/Test fractions per dataset (e.g., Cora: 0.05/0.18/0.37); see the split sketch after this table.
Hardware Specification | Yes | All experiments are conducted on two Nvidia Quadro RTX 6000 GPUs.
Software Dependencies | No | The paper mentions using PyTorch and refers to implementations from Du et al. (2019b) and Rong et al. (2019), but does not specify version numbers for PyTorch or any other software libraries used.
Experiment Setup | Yes | We conduct experiments on a GCN (Kipf & Welling, 2017), with a width of 1,000 at each hidden layer and depths ranging from 2 to 29. Figure 2 shows the training and test accuracy on Cora, Citeseer, and Pubmed after 300 training epochs. ... For C-DropEdge, we perform a random hyper-parameter search and fix the edge-preserving rate at ρ(G) = |V|/(2|E|); see the sketch after this table.
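Two short illustrations follow. First, for the Dataset Splits row: a minimal sketch, assuming the Table 3 fractions are applied to a random node permutation, of how Train/Val/Test masks could be built. This is not code from the paper; `make_split_masks` is a hypothetical helper, and the fractions need not sum to one (for Cora, 0.05/0.18/0.37 leaves the remaining nodes in no split).

```python
import torch

def make_split_masks(num_nodes: int, fracs: tuple, seed: int = 0):
    """Partition a random node permutation into train/val/test masks.

    `fracs` holds the Train/Val/Test fractions from Table 3; they need not
    sum to 1, in which case leftover nodes belong to no split.
    """
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    masks, start = [], 0
    for frac in fracs:
        size = int(frac * num_nodes)
        mask = torch.zeros(num_nodes, dtype=torch.bool)
        mask[perm[start:start + size]] = True
        masks.append(mask)
        start += size
    return masks  # [train_mask, val_mask, test_mask]

# Cora from Table 3: 2,708 nodes with 0.05/0.18/0.37 fractions
train_mask, val_mask, test_mask = make_split_masks(2708, (0.05, 0.18, 0.37))
print(train_mask.sum().item(), val_mask.sum().item(), test_mask.sum().item())
# -> 135 487 1001, close to the standard 140/500/1000 Cora split
```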
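Second, for the Experiment Setup row: a minimal sketch, assuming a PyTorch Geometric-style `edge_index` (a 2 × num_edges tensor), of the fixed edge-preserving rate ρ(G) = |V|/(2|E|) (the inverse of the average degree) combined with DropEdge-style random edge sampling. This is an illustration, not the authors' C-DropEdge implementation; the function names here are hypothetical.

```python
import torch

def edge_preserving_rate(num_nodes: int, num_edges: int) -> float:
    """rho(G) = |V| / (2|E|): one over the average node degree."""
    return num_nodes / (2.0 * num_edges)

def drop_edges(edge_index: torch.Tensor, keep_rate: float) -> torch.Tensor:
    """Keep each edge independently with probability `keep_rate` (DropEdge-style)."""
    mask = torch.rand(edge_index.size(1)) < keep_rate
    return edge_index[:, mask]

# Toy 4-node graph; each column of edge_index is treated as one edge.
edge_index = torch.tensor([[0, 0, 1, 2, 3],
                           [1, 2, 2, 3, 0]])
rho = edge_preserving_rate(num_nodes=4, num_edges=edge_index.size(1))
sparser = drop_edges(edge_index, keep_rate=rho)
print(f"rho = {rho:.2f}; kept {sparser.size(1)} of {edge_index.size(1)} edges")
```

In the setup quoted above, this rate is fixed per graph rather than tuned, so only the remaining hyper-parameters go through the random search the paper mentions.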