Composition-based Multi-Relational Graph Convolutional Networks
Authors: Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, Partha Talukdar
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our proposed method on multiple tasks such as node classification, link prediction, and graph classification, and achieve demonstrably superior results. |
| Researcher Affiliation | Academia | ¹Indian Institute of Science, ²Carnegie Mellon University, ³Columbia University |
| Pseudocode | No | No pseudocode or algorithm block explicitly labeled as such. |
| Open Source Code | Yes | The source code of COMPGCN and datasets used in the paper have been made available at http://github.com/malllabiisc/CompGCN. |
| Open Datasets | Yes | In our experiments, we utilize FB15k-237 (Toutanova & Chen, 2015) and WN18RR (Dettmers et al., 2018) datasets for evaluation. ... we evaluate COMPGCN on MUTAG (Node) and AM (Ristoski & Paulheim, 2016) datasets. ... We evaluate on two bioinformatics datasets: MUTAG (Graph) and PTC (Yanardag & Vishwanathan, 2015). Summary statistics of the datasets used are provided in Appendix A.2. |
| Dataset Splits | Yes | For selecting the best model, we perform a hyperparameter search using the validation data over the values listed in Table 8. Node Classification: Following Schlichtkrull et al. (2017), we use 10% of the training data as validation for selecting the best model for both datasets. Graph Classification: Similar to Yanardag & Vishwanathan (2015) and Xu et al. (2019), we report the mean and standard deviation of validation accuracies across 10-fold cross-validation. |
| Hardware Specification | No | The paper does not specify any particular hardware details such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | For all the tasks, we used COMPGCN built on the PyTorch Geometric framework (Fey & Lenssen, 2019). This mentions a framework but does not provide specific version numbers for PyTorch or PyTorch Geometric. |
| Experiment Setup | Yes | For evaluation, 200-dimensional embeddings are used for both node and relation embeddings. For selecting the best model, we perform a hyperparameter search using the validation data over the values listed in Table 8. For training link prediction models, we use the standard binary cross-entropy loss with label smoothing (Dettmers et al., 2018). ... we restrict the number of hidden units to 32. We use cross-entropy loss for training our model. ... training is done using the Adam optimizer (Kingma & Ba, 2014) and Xavier initialization (Glorot & Bengio, 2010) is used for initializing parameters. (A minimal code sketch of this configuration follows the table.) |
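
The training configuration quoted in the Experiment Setup row (200-dimensional node and relation embeddings, binary cross-entropy with label smoothing, Adam, Xavier initialization) can be pictured as a short PyTorch sketch. This is a minimal sketch under stated assumptions, not the authors' released implementation: the `ScoringModel` stub, its DistMult-style scorer, the learning rate, and the smoothing constant of 0.1 are illustrative placeholders, and the actual CompGCN encoder and decoder live in the repository linked above.

```python
# Minimal sketch of the quoted training setup. NOT the authors' code:
# the CompGCN layer is stubbed out, and `ScoringModel`, the learning
# rate, and LABEL_SMOOTHING are illustrative assumptions.
import torch
import torch.nn as nn

EMBED_DIM = 200        # "200-dimensional embeddings for node and relation embeddings"
LABEL_SMOOTHING = 0.1  # assumed value; the paper only cites Dettmers et al. (2018)

class ScoringModel(nn.Module):
    """Placeholder standing in for the CompGCN encoder plus a decoder."""
    def __init__(self, num_nodes: int, num_rels: int):
        super().__init__()
        self.node_emb = nn.Parameter(torch.empty(num_nodes, EMBED_DIM))
        self.rel_emb = nn.Parameter(torch.empty(num_rels, EMBED_DIM))
        # "Xavier initialization (Glorot & Bengio, 2010) is used for
        # initializing parameters"
        nn.init.xavier_uniform_(self.node_emb)
        nn.init.xavier_uniform_(self.rel_emb)

    def forward(self, heads: torch.Tensor, rels: torch.Tensor) -> torch.Tensor:
        # Trivial DistMult-style scorer over all candidate tail nodes,
        # standing in for the full CompGCN message passing + decoder.
        hr = self.node_emb[heads] * self.rel_emb[rels]
        return hr @ self.node_emb.t()  # (batch, num_nodes) logits

model = ScoringModel(num_nodes=14541, num_rels=237)  # FB15k-237 sizes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # "Adam optimizer"
bce = nn.BCEWithLogitsLoss()  # "standard binary cross entropy loss"

def train_step(heads, rels, targets):
    # targets: multi-hot (batch, num_nodes) tensor of true tails,
    # smoothed in the spirit of Dettmers et al. (2018).
    smoothed = (1.0 - LABEL_SMOOTHING) * targets + LABEL_SMOOTHING / targets.size(1)
    optimizer.zero_grad()
    loss = bce(model(heads, rels), smoothed)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Scoring against all nodes at once (the 1-N strategy of Dettmers et al., 2018) is what makes the multi-hot targets and label smoothing natural here; hyperparameters beyond those quoted would come from the validation search over Table 8 of the paper.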