On the Initialization of Graph Neural Networks

Authors: Jiahang Li, Yakun Song, Xiang Song, David Wipf

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct comprehensive experiments on 15 datasets to show that Virgo can lead to superior model performance and more stable variance at initialization on node classification, link prediction and graph classification tasks."
Researcher Affiliation | Collaboration | Jiahang Li 1*, Yakun Song 2*, Xiang Song 3, David Paul Wipf 4 (1 The Hong Kong Polytechnic University, 2 Shanghai Jiao Tong University, 3 Amazon AI, 4 Amazon Shanghai AI Lab).
Pseudocode | No | The paper describes mathematical derivations and experimental procedures but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper provides no explicit statement of, or link to, open-source code for the described methodology.
Open Datasets | Yes | "For node classification, we choose three citation network datasets (Sen et al., 2008): cora, citeseer, pubmed, and three OGB (Hu et al., 2020) datasets: ogbn-arxiv, ogbn-proteins and ogbn-products. For link prediction, we adopt four OGB datasets: ogbl-ddi, ogbl-collab, ogbl-citation2 and ogbl-ppa. For graph classification, we take three social network datasets imdb-b, imdb-m and collab from (Yanardag & Vishwanathan, 2015), and two OGB datasets ogbg-molhiv and ogbg-molpcba."
Dataset Splits | Yes | "We iterate over multiple hyperparameter settings and search for the setting with the best mean on validation datasets. We then report the mean and standard deviation on testing datasets with the selected setting as the final results."
Hardware Specification | Yes | "All experiments are conducted on a single Tesla T4 GPU with 16GB memory."
Software Dependencies | No | The paper mentions DGL (Wang et al., 2020) and PyTorch Geometric (PyG) (Fey & Lenssen, 2019) but does not provide version numbers for these software dependencies.
Experiment Setup | Yes | "We conduct a hyperparameter sweep to search for the best hyperparameter settings. To be specific, for each hyperparameter setting, we calculate the mean and standard deviation of 10 trials across different random seeds. We iterate over multiple hyperparameter settings and search for the setting with the best mean on validation datasets. We then report the mean and standard deviation on testing datasets with the selected setting as the final results. All experiments are conducted on a single Tesla T4 GPU with 16GB memory. Details of the experimental setting are presented in Appendix B."
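The selection protocol quoted above (10 seeded trials per setting, pick the setting with the best validation mean, report the test mean and standard deviation) can be sketched as follows. This is a minimal illustration, not the authors' code: `run_trial` is a hypothetical placeholder standing in for the actual training routine, and the `settings` list is invented for demonstration.

```python
import statistics
import random


def run_trial(setting, seed):
    # Hypothetical placeholder: in the real experiments this would train
    # a GNN with `setting` and `seed`, then return
    # (validation_score, test_score). Here we fabricate scores so the
    # selection logic below is runnable.
    rng = random.Random(seed)
    base = setting["lr"] * 100
    return base + rng.random(), base + rng.random()


def sweep(settings, n_trials=10):
    """Run n_trials seeds per setting; select by best mean validation score.

    Returns (best_setting, test_mean, test_std) for the selected setting,
    mirroring the reporting protocol described in the paper.
    """
    results = []
    for setting in settings:
        # One trial per random seed, as in "10 trials across different seeds".
        val, test = zip(*(run_trial(setting, seed) for seed in range(n_trials)))
        results.append((statistics.mean(val),
                        statistics.mean(test),
                        statistics.stdev(test),
                        setting))
    # Model selection: best mean on the validation set.
    best = max(results, key=lambda r: r[0])
    # Report test mean and standard deviation for the selected setting.
    return best[3], best[1], best[2]


settings = [{"lr": 0.01}, {"lr": 0.05}]  # invented example grid
best_setting, test_mean, test_std = sweep(settings)
print(best_setting, round(test_mean, 3), round(test_std, 3))
```

The key design point is that the test set is consulted only once, after the setting has been fixed on validation data, which avoids selection bias in the reported test numbers.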