Graph Geometry Interaction Learning

Authors: Shichao Zhu, Shirui Pan, Chuan Zhou, Jia Wu, Yanan Cao, Bin Wang

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Promising experimental results are presented for five benchmark datasets on node classification and link prediction tasks.
Researcher Affiliation | Collaboration | (1) Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; (2) Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China; (3) School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China; (4) Faculty of Information Technology, Monash University, Melbourne, Australia; (5) Faculty of Science and Engineering, Macquarie University, Sydney, Australia; (6) Xiaomi AI Lab, Beijing, China
Pseudocode | No | The paper describes its method through text and mathematical equations, but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code and data are available at https://github.com/CheriseZhu/GIL.
Open Datasets | Yes | For node classification and link prediction tasks, we consider five benchmark datasets: Disease, Airport, Cora, Pubmed and Citeseer. ... In the citation network datasets: Cora, Pubmed and Citeseer [27] ... The first two datasets are derived by [19].
Dataset Splits | Yes | In node classification, we use the 30/10/60 percent splits for training, validation and test on Disease dataset, 70/15/15 percent splits for Airport, and standard splits in GCN [2] on citation network datasets.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running experiments.
Software Dependencies | No | The paper mentions using the 'Adam optimizer' and the 'geometric deep learning extension library provided by [32, 33]', which points to Geoopt and PyTorch Geometric, but it does not provide version numbers for these software dependencies.
Experiment Setup | Yes | In our experiments, we closely follow the parameter settings in [19] and optimize hyperparameters on the same dataset split for all baselines. ... All methods use the following training strategy, including the same random seeds for initialization, and the same early stopping on validation set with 100 patience epochs. ... We evaluated performance on the test set over 10 random parameter initializations. ... The same 16-dimension and hyper-parameter selection strategy are used for all baselines... The optimal number of hidden layers for other methods is obtained by grid search in [1, 2, 3]. The optimal L2 regularization with weight decay [1e-4, 5e-4, 1e-3] and dropout rate [0.0-0.6] are obtained by grid search for each method. ... We implemented GIL using the Adam optimizer [31]
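The Dataset Splits row reports a 30/10/60 percent train/validation/test split for the Disease dataset. A minimal sketch of such a random node split is below; this is a generic illustration, not the authors' exact split procedure, and the function name is hypothetical:

```python
import numpy as np

def split_nodes(num_nodes, train_frac=0.30, val_frac=0.10, seed=0):
    """Randomly partition node indices into train/val/test sets.

    Illustrates a 30/10/60 percent split (as reported for the Disease
    dataset); the remaining fraction after train and val goes to test.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_nodes)          # shuffle all node indices
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

train, val, test = split_nodes(1000)
# 300 train nodes, 100 validation nodes, 600 test nodes
```

The three index arrays are disjoint and together cover every node, so no example leaks between splits.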
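The Experiment Setup row describes early stopping on the validation set with 100 patience epochs and grid search over hidden-layer count [1, 2, 3], weight decay [1e-4, 5e-4, 1e-3], and dropout in the 0.0-0.6 range. A sketch of that selection strategy is below; the `val_metric_fn` stand-in replaces a real training epoch (which the paper runs with the Adam optimizer), and all function names are placeholders rather than the authors' implementation:

```python
import itertools

def train_with_early_stopping(val_metric_fn, max_epochs=5000, patience=100):
    """Track the best validation metric; stop after `patience` epochs
    with no improvement (100 patience epochs, as in the paper).

    val_metric_fn(epoch) -> validation accuracy after that epoch; in a
    real run this would train the model for one epoch and evaluate.
    """
    best, best_epoch = float("-inf"), 0
    for epoch in range(max_epochs):
        acc = val_metric_fn(epoch)
        if acc > best:
            best, best_epoch = acc, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement within the patience window
    return best

def grid_search(val_metric_for):
    """Exhaustive grid search over the ranges reported in the paper;
    the dropout grid samples the stated 0.0-0.6 range."""
    grids = {
        "weight_decay": [1e-4, 5e-4, 1e-3],
        "dropout": [0.0, 0.2, 0.4, 0.6],
        "num_layers": [1, 2, 3],
    }
    best_cfg, best_acc = None, float("-inf")
    for values in itertools.product(*grids.values()):
        cfg = dict(zip(grids, values))
        acc = train_with_early_stopping(val_metric_for(cfg))
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc

# Toy stand-in metric: peaks at weight_decay=5e-4, dropout=0.2.
def toy_val_metric(cfg):
    score = 1.0 - abs(cfg["weight_decay"] - 5e-4) * 1000 - abs(cfg["dropout"] - 0.2)
    return lambda epoch: score

best_cfg, best_acc = grid_search(toy_val_metric)
```

Because the selection runs entirely on the validation split, the test set (evaluated over 10 random initializations in the paper) stays untouched during tuning.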