Ensemble Multi-Relational Graph Neural Networks

Authors: Yuling Wang, Hao Xu, Yanhua Yu, Mengdi Zhang, Zhenhao Li, Yuji Yang, Wei Wu

IJCAI 2022

Reproducibility assessment. Each variable below is listed with its result and the LLM's supporting response.

Research Type: Experimental
LLM Response: "Extensive experiments conducted on four benchmark datasets well demonstrate the effectiveness of the proposed model."

Researcher Affiliation: Collaboration
LLM Response: "Yuling Wang (1,2), Hao Xu (2), Yanhua Yu (1), Mengdi Zhang (2), Zhenhao Li (1), Yuji Yang (2) and Wei Wu (2). Affiliations: (1) Beijing University of Posts and Telecommunications; (2) Meituan."

Pseudocode: Yes
LLM Response: "The pseudocode of the EnMP layer is shown in Appendix A. Algorithm 1: Relational Coefficients Learning."

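The pseudocode itself is only in the paper's appendix and repository. As a rough illustration of what an ensemble multi-relational message-passing layer with learned relational coefficients could look like, here is a minimal PyTorch sketch; the class name, the softmax parameterization of the coefficients, and the per-relation propagation are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EnMPLayerSketch(nn.Module):
    """Hypothetical ensemble multi-relational message-passing layer.

    Each relation r contributes a propagation A_r @ H, and the results are
    mixed by relational coefficients mu, softmax-normalized so they stay
    positive and sum to 1 (assumed parameterization).
    """

    def __init__(self, num_relations: int):
        super().__init__()
        # One learnable logit per relation.
        self.rel_logits = nn.Parameter(torch.zeros(num_relations))

    def forward(self, h: torch.Tensor, adjs: list[torch.Tensor]) -> torch.Tensor:
        # h: node features [N, d]; adjs[r]: normalized adjacency for relation r [N, N]
        mu = torch.softmax(self.rel_logits, dim=0)
        out = torch.zeros_like(h)
        for r, a in enumerate(adjs):
            prop = torch.sparse.mm(a, h) if a.is_sparse else a @ h
            out = out + mu[r] * prop
        return out
```
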
Open Source Code: Yes
LLM Response: "Code and appendix are at https://github.com/tuzibupt/EMR."

Open Datasets: Yes
LLM Response: "The following four real-world heterogeneous datasets in various fields are utilized and can be divided into two categories: i) the node type and edge type are both heterogeneous (DBLP [Fu et al., 2020], ACM [Lv et al., 2021]); ii) the node type is homogeneous but the edge type is heterogeneous (MUTAG [Schlichtkrull et al., 2018], BGS [Schlichtkrull et al., 2018])."

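For readers who want the same benchmarks, PyTorch Geometric ships loaders for datasets from the cited sources; the sketch below is an assumption about suitable loaders (the paper does not say which tooling it used), and the root paths are placeholders.

```python
from torch_geometric.datasets import DBLP, Entities, HGBDataset

# Heterogeneous node and edge types:
dblp = DBLP(root='data/DBLP')                   # DBLP variant from Fu et al., 2020
acm = HGBDataset(root='data/HGB', name='ACM')   # HGB benchmark, Lv et al., 2021

# Homogeneous nodes, heterogeneous edges (RGCN entity datasets,
# Schlichtkrull et al., 2018):
mutag = Entities(root='data/MUTAG', name='MUTAG')
bgs = Entities(root='data/BGS', name='BGS')
```
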
Dataset Splits: No
LLM Response: "We conduct 10 runs on all datasets with the fixed training/validation/test split for all experiments." The paper states that the split is fixed but does not report its percentages or sample counts (for the validation set in particular), which are needed for reproducibility.

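Even without the exact split sizes, the "10 runs on one fixed split" protocol can be mirrored. Below is a minimal sketch of that loop; the run function and its stand-in metric are placeholders, since no training code appears here.

```python
import statistics
import torch

def seeded_run(seed: int) -> float:
    """One training/evaluation run on the fixed split; only the seed varies."""
    torch.manual_seed(seed)
    # Placeholder: train on the fixed split and return test accuracy.
    return float(torch.rand(()))  # stand-in metric, for illustration only

accs = [seeded_run(s) for s in range(10)]
print(f"mean {statistics.mean(accs):.4f} +/- std {statistics.stdev(accs):.4f}")
```
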
Hardware Specification: No
LLM Response: The paper does not specify the hardware used to run the experiments (e.g., GPU models, CPU types, memory).

Software Dependencies: No
LLM Response: "We implement EMR-GNN based on Pytorch." The paper names PyTorch but does not pin its version or list any other dependencies with versions, which reproducibility requires.

Experiment Setup: Yes
LLM Response: "For f(X; W) and gθ(·), we choose one-layer MLP for DBLP and ACM, and linear layers for MUTAG and BGS. We conduct 10 runs on all datasets with the fixed training/validation/test split for all experiments. More implementation details can be seen in Appendix B.3."
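
As a concrete reading of that setup, the hypothetical helper below (its name, dimensions, and the ReLU nonlinearity are assumptions) picks a one-layer MLP for DBLP/ACM and a plain linear layer for MUTAG/BGS.

```python
import torch.nn as nn

def make_transform(in_dim: int, out_dim: int, dataset: str) -> nn.Module:
    """Hypothetical helper mirroring the stated setup for f(X; W) / g_theta."""
    if dataset in {'DBLP', 'ACM'}:
        # One-layer MLP: linear map followed by a nonlinearity (ReLU assumed).
        return nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
    # MUTAG and BGS use a plain linear layer.
    return nn.Linear(in_dim, out_dim)
```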