Communicative Message Passing for Inductive Relation Reasoning
Authors: Sijie Mai, Shuangjia Zheng, Yuedong Yang, Haifeng Hu
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show substantial performance gains in comparison to state-of-the-art methods on commonly used benchmark datasets with various inductive settings. Evaluating CoMPILE and several previously proposed models on three inductive datasets, our model achieves state-of-the-art AUC-PR and Hits@10 across most of them. We also extract new inductive datasets by filtering out the triplets that have no enclosing subgraph to evaluate inductive relation reasoning more accurately. |
| Researcher Affiliation | Academia | Sun Yat-sen University {maisj, zhengshj9}@mail2.sysu.edu.cn, {yangyd25, huhaif}@mail.sysu.edu.cn |
| Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | For more details, please refer to our codes at: https://github.com/TmacMai/CoMPILE_Inductive_Knowledge_Graph |
| Open Datasets | Yes | WN18RR (Dettmers et al. 2017), FB15k-237 (Toutanova et al. 2015), and NELL-995 (Xiong, Hoang, and Wang 2017) are commonly used datasets that were originally developed for transductive relation prediction. Teru, Denis, and Hamilton (2020) extract four versions of inductive datasets from each dataset. Each inductive dataset consists of train and test graphs, where the test graph contains entities that are not present in the train graph. |
| Dataset Splits | No | The paper describes train and test graphs and how negative triplets are sampled for evaluation. However, it does not explicitly mention a validation set or its split details, such as the percentage or count of samples used. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments; no computing infrastructure is described. |
| Software Dependencies | No | The paper states: "We implement our model on Pytorch." However, it does not provide a specific version number for PyTorch or any other software dependency, which is required for reproducibility. |
| Experiment Setup | Yes | We use Adam (Kingma and Ba 2015) as the optimizer with a learning rate of 0.001. The hop number h is set to 3, which is consistent with GraIL. We train the model four times and average the testing results to obtain the final performance. The number of iterations l is set to 3. |
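
The Experiment Setup row pins down the optimizer, the learning rate, and the run-averaging protocol. The sketch below is a minimal, hypothetical illustration of that protocol in PyTorch (the framework the paper names), not the authors' released code: `DummyScorer` and the random training data are stand-ins for the actual CoMPILE message-passing network and its subgraph batches; only the Adam learning rate of 0.001 and the averaging over four runs come from the paper, with the paper's h = 3 and l = 3 noted in comments.

```python
import torch
from torch import nn
from statistics import mean

# Hypothetical stand-in for the CoMPILE scorer. The real model is a
# communicative message-passing network over enclosing subgraphs with
# hop number h = 3 and l = 3 message-passing iterations (per the paper).
class DummyScorer(nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x).squeeze(-1)

def run_once(seed: int) -> float:
    torch.manual_seed(seed)
    model = DummyScorer()
    # Adam with learning rate 0.001, as stated in the paper's setup.
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    loss_fn = nn.BCEWithLogitsLoss()
    # Toy loop on random data (placeholder for positive/negative triplet batches).
    for _ in range(100):
        x = torch.randn(64, 32)
        y = (x.sum(dim=1) > 0).float()
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    # In the paper this would be the run's test metric, not the training loss.
    return loss.item()

# The paper averages testing results over four independent training runs.
final = mean(run_once(seed) for seed in range(4))
print(f"mean over four runs: {final:.4f}")
```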
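
The reported metrics are AUC-PR and Hits@10. As a reference for how such metrics are typically computed in this line of work, here is a hedged sketch: `average_precision_score` is scikit-learn's AUC-PR estimate, and the `hits_at_10` helper assumes a GraIL-style protocol of ranking each true triplet against its own pool of sampled negatives; the 50 negatives per positive in the toy usage are an assumption of this sketch, not something this report verifies against the paper.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def auc_pr(pos_scores: np.ndarray, neg_scores: np.ndarray) -> float:
    """AUC-PR over pooled scores of positive and negative triplets."""
    scores = np.concatenate([pos_scores, neg_scores])
    labels = np.concatenate([np.ones_like(pos_scores), np.zeros_like(neg_scores)])
    return average_precision_score(labels, scores)

def hits_at_10(pos_scores: np.ndarray, neg_scores: np.ndarray) -> float:
    """Fraction of true triplets ranked in the top 10 against their own
    sampled negatives; neg_scores holds one row of negatives per positive."""
    # Rank of a positive = 1 + number of its negatives scoring at least as high.
    ranks = 1 + (neg_scores >= pos_scores[:, None]).sum(axis=1)
    return float((ranks <= 10).mean())

# Toy usage: 100 positives, 50 sampled negatives each (assumed protocol).
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=100)
neg = rng.normal(0.0, 1.0, size=(100, 50))
print(f"AUC-PR: {auc_pr(pos, neg.ravel()):.3f}  Hits@10: {hits_at_10(pos, neg):.3f}")
```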