Contextual Parameter Generation for Knowledge Graph Link Prediction

Authors: George Stoica, Otilia Stretcu, Emmanouil Antonios Platanios, Tom Mitchell, Barnabás Póczos

AAAI 2020, pp. 3000-3008

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply our method on two existing link prediction methods, including the current state-of-the-art, resulting in significant performance gains and establishing a new state-of-the-art for this task. (Section 5, Experiments:) In this section, we empirically evaluate the performance of CoPER on several established link-prediction datasets.
Researcher Affiliation | Academia | George Stoica,* Otilia Stretcu,* Emmanouil Antonios Platanios,* Tom M. Mitchell, Barnabás Póczos. Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, Pennsylvania 15213. {gis, ostretcu, e.a.platanios, tom.mitchell, bapoczos}@cs.cmu.edu
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | All supplementary material along with code to reproduce our experiments can be accessed at: https://github.com/otiliastr/coper.
Open Datasets | Yes | We adopt the following datasets used in prior literature: Unified Medical Language Systems (UMLS) (Kok and Domingos 2007), Alyawarra Kinship, WN18RR (Dettmers et al. 2018), FB15k-237 (Toutanova and Chen 2015), and NELL-995 (Xiong, Hoang, and Wang 2017).
Dataset Splits | Yes | To keep our train/validation/test dataset partitions consistent with those of prior literature and ensure fair comparisons, we use the published datasets from Das et al. (2018) and Lin, Socher, and Xiong (2018).
Hardware Specification | Yes | We conduct all our experiments on a single Nvidia Titan X GPU.
Software Dependencies | No | The paper states that the code is in a repository but does not explicitly list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) in the text.
Experiment Setup | Yes | We choose the dropout parameters by performing a grid search between [0, 1] based on the validation set Hits@1. Regarding the parameter generation module, we perform experiments using both g_linear and g_MLP. For the MLP, we use a single hidden layer with a ReLU activation and chose the number of hidden units by also performing a grid search. We train our models using the binary cross-entropy loss function. For each positive training example, we sample 10 negatives... and use a label smoothing factor of 0.1.
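
The setup quoted above pins down the main training ingredients: a g_MLP parameter generator with one ReLU hidden layer, binary cross-entropy, 10 sampled negatives per positive, and a label smoothing factor of 0.1. The following is a minimal sketch of those pieces for readers attempting reproduction. It is not the authors' implementation (see the linked repository): it assumes PyTorch, the embedding sizes and hidden-unit count are illustrative placeholders, and the exact label-smoothing formula is a common choice rather than one stated in the paper.

```python
# Minimal reproduction sketch (assumptions: PyTorch; illustrative sizes).
# Not the authors' code -- see https://github.com/otiliastr/coper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMLP(nn.Module):
    """g_MLP: generates scoring-function parameters from a relation
    embedding via a single hidden layer with ReLU (per the paper).
    The hidden size would be chosen by grid search on validation Hits@1."""
    def __init__(self, rel_dim: int, hidden: int, n_params: int):
        super().__init__()
        self.fc1 = nn.Linear(rel_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_params)

    def forward(self, rel_emb: torch.Tensor) -> torch.Tensor:
        return self.fc2(F.relu(self.fc1(rel_emb)))

def smoothed_bce(logits: torch.Tensor, labels: torch.Tensor,
                 eps: float = 0.1) -> torch.Tensor:
    """Binary cross-entropy with label smoothing (factor 0.1 in the paper).
    The smoothing formula below (y -> y*(1-eps) + eps/2) is a common
    formulation and an assumption, not taken from the paper."""
    targets = labels * (1.0 - eps) + 0.5 * eps
    return F.binary_cross_entropy_with_logits(logits, targets)

# Illustrative usage: one positive plus 10 sampled negatives per example.
gen = GMLP(rel_dim=100, hidden=200, n_params=100)   # placeholder sizes
rel_emb = torch.randn(11, 100)                      # 1 positive + 10 negatives
scores = (gen(rel_emb) * torch.randn(11, 100)).sum(-1)  # stand-in scoring fn
labels = torch.cat([torch.ones(1), torch.zeros(10)])
loss = smoothed_bce(scores, labels)
```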