Multi-hop Attention Graph Neural Networks

Authors: Guangtao Wang, Rex Ying, Jing Huang, Jure Leskovec

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Experimental results on node classification as well as the knowledge graph completion benchmarks show that MAGNA achieves state-of-the-art results: MAGNA achieves up to 5.7% relative error reduction over the previous state-of-the-art on Cora, Citeseer, and Pubmed. MAGNA also obtains the best performance on a large-scale Open Graph Benchmark dataset. On knowledge graph completion MAGNA advances state-of-the-art on WN18RR and FB15k-237 across four different performance metrics.'
Researcher Affiliation | Collaboration | 1 JD AI Research; 2 Computer Science, Stanford University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states: 'code will be released after publication.' This does not provide concrete access at the time of publication.
Open Datasets | Yes | 'We employ four benchmark datasets for node classification: (1) standard citation network benchmarks Cora, Citeseer and Pubmed [Sen et al., 2008; Kipf and Welling, 2016]; and (2) a benchmark dataset ogbn-arxiv on 170k nodes and 1.2m edges from the Open Graph Benchmark [Weihua Hu, 2020].' (A hedged data-loading sketch follows the table.)
Dataset Splits | Yes | 'We follow the standard data splits for all datasets. Further information about these datasets is summarized in the Appendix.' [...] 'We use the standard split for the benchmarks, and the standard testing procedure of predicting tail (head) entity given the head (tail) entity and relation type.'
Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments.
Software Dependencies | No | The paper does not provide ancillary software details with version numbers.
Experiment Setup | Yes | 'For datasets Cora, Citeseer and Pubmed, we use 6 MAGNA blocks with hidden dimension 512 and 8 attention heads. For the large-scale ogbn-arxiv dataset, we use 2 MAGNA blocks with hidden dimension 128 and 8 attention heads.' (See the configuration sketch after the table.)
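
The Open Datasets and Dataset Splits rows quote the paper's reliance on the standard Planetoid citation benchmarks (Cora, Citeseer, Pubmed) and ogbn-arxiv with their public splits, but the paper ships no loading code. The following is a minimal sketch of one common way to obtain these datasets and their standard splits; the use of PyTorch Geometric and the ogb package is our assumption, not something the paper states.

# Hedged sketch: loading the four node-classification benchmarks and their
# standard public splits. Library choice (PyTorch Geometric + ogb) is an
# assumption; the paper does not specify its data-loading stack.
from torch_geometric.datasets import Planetoid
from ogb.nodeproppred import PygNodePropPredDataset

# Citation benchmarks (Cora, Citeseer, Pubmed) with the standard
# Planetoid split exposed as boolean node masks.
for name in ["Cora", "CiteSeer", "PubMed"]:
    dataset = Planetoid(root="data/planetoid", name=name)
    data = dataset[0]
    print(name, data.num_nodes, "nodes:",
          int(data.train_mask.sum()), "train /",
          int(data.val_mask.sum()), "val /",
          int(data.test_mask.sum()), "test")

# ogbn-arxiv from the Open Graph Benchmark; its official split is
# returned as index tensors by get_idx_split().
arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data/ogb")
split_idx = arxiv.get_idx_split()
train_idx, valid_idx, test_idx = (split_idx["train"],
                                  split_idx["valid"],
                                  split_idx["test"])
print("ogbn-arxiv:", arxiv[0].num_nodes, "nodes:",
      len(train_idx), "train /", len(valid_idx), "valid /",
      len(test_idx), "test")

Using the packaged splits as above reproduces the "standard data splits" the paper refers to without any manual partitioning.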
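The Experiment Setup row quotes the reported architecture hyperparameters. As a minimal sketch, they can be captured in a small configuration object; the class and field names below are hypothetical, and only the numeric values come from the quoted text.

# Hedged sketch: MAGNA hyperparameters quoted in the Experiment Setup row,
# captured in a simple config. Field names are our own; only the values
# (number of blocks, hidden dimension, attention heads) come from the paper.
from dataclasses import dataclass

@dataclass
class MagnaConfig:
    num_blocks: int   # number of stacked MAGNA blocks
    hidden_dim: int   # hidden dimension of each block
    num_heads: int    # attention heads per block

# Cora, Citeseer, Pubmed: 6 blocks, hidden dimension 512, 8 heads.
citation_cfg = MagnaConfig(num_blocks=6, hidden_dim=512, num_heads=8)

# ogbn-arxiv (larger graph, smaller model): 2 blocks, hidden dimension 128, 8 heads.
arxiv_cfg = MagnaConfig(num_blocks=2, hidden_dim=128, num_heads=8)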