AutoGEL: An Automated Graph Neural Network with Explicit Link Information

Authors: Zhili Wang, Shimin Di, Lei Chen

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on benchmark data sets demonstrate the superiority of AutoGEL on several tasks.
Researcher Affiliation | Academia | Zhili WANG, Shimin DI, Lei CHEN, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, {zwangeo,sdiaa,leichen}@connect.ust.hk
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. The methods are described in narrative text.
Open Source Code | Yes | Code is available at https://github.com/zwangeo/AutoGEL
Open Datasets | Yes | For the LP task on homogeneous graphs, we follow [Zhang and Chen, 2018; Li et al., 2020] to utilize the datasets: NS [Newman, 2006], Power [Watts and Strogatz, 1998], Router [Spring et al., 2002], C.ele [Watts and Strogatz, 1998], USAir [Batagelj and Mrvar, 2009], Yeast [Von Mering et al., 2002] and PB [Ackland et al., 2005]. As for the LP task on multi-relational graphs, we mainly adopt the benchmark knowledge graphs (KGs) FB15k-237 [Toutanova and Chen, 2015] and WN18RR [Dettmers et al., 2018].
Dataset Splits | No | The paper mentions 'test triples' for the LP task evaluation but does not provide specific percentages or sample counts for the training, validation, and test splits of all datasets in the main text. It states 'More details about data sets, hyper-parameter settings and search space designs are introduced in Appendix A.1.1, A.1.2 and A.1.3, respectively' but does not explicitly promise that split details appear in the appendix.
Hardware Specification | Yes | All the experiments are performed using one single RTX 2080 Ti GPU.
Software Dependencies | No | The paper mentions the 'Pytorch framework' but does not specify a version number for it or for other software dependencies.
Experiment Setup | No | The paper states: 'More details about data sets, hyper-parameter settings and search space designs are introduced in Appendix A.1.1, A.1.2 and A.1.3, respectively.' The main text does not contain specific hyperparameter values or detailed training configurations.