Adapting Neural Link Predictors for Data-Efficient Complex Query Answering

Authors: Erik Arakelyan, Pasquale Minervini, Daniel Daza, Michael Cochez, Isabelle Augenstein

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments, CQDA produces significantly more accurate results than current state-of-the-art methods, improving from 34.4 to 35.1 Mean Reciprocal Rank values averaged across all datasets and query types while using 30% of the available training query types.
Researcher Affiliation | Collaboration | 1) University of Copenhagen, 2) University of Edinburgh, 3) Vrije Universiteit Amsterdam, 4) University of Amsterdam, 5) Discovery Lab, Elsevier, The Netherlands
Pseudocode | No | The paper describes the method and steps in natural language but does not contain a formally labeled pseudocode or algorithm block.
Open Source Code | Yes | Source code and datasets are available at https://github.com/EdinburghNLP/adaptive-cqd.
Open Datasets | Yes | To evaluate the complex query answering capabilities of our method, we use a benchmark comprising 3 KGs: FB15K [Bordes et al., 2013], FB15K-237 [Toutanova and Chen, 2015], and NELL995 [Xiong et al., 2017].
Dataset Splits | Yes | Valid split — 1p: 59,078 / 20,094 / 16,910; Others: 8,000 / 5,000 / 4,000 (one value per KG).
Hardware Specification | No | The paper mentions 'GPU donations' in the acknowledgements but does not provide specific hardware details (e.g., GPU models, CPU types, or memory) used for running the experiments.
Software Dependencies | No | The paper mentions ComplEx-N3 as the link prediction model and Adagrad as the optimizer, but it does not specify any software dependencies with version numbers (e.g., Python version, PyTorch version). (A scoring-function sketch for ComplEx-N3 follows the table.)
Experiment Setup | Yes | We train for 50,000 steps using Adagrad as an optimiser and 0.1 as the learning rate. The beam-size hyper-parameter k was selected from k ∈ {512, 1024, ..., 8192}, and the loss was selected across 1-vs-all [Lacroix et al., 2018] and binary cross-entropy with one negative sample. (A minimal configuration sketch also follows the table.)
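The Software Dependencies row names ComplEx-N3 as the underlying link predictor. For reference, below is a minimal sketch, assuming PyTorch, of the ComplEx scoring function together with the N3 (nuclear 3-norm) regulariser of Lacroix et al. [2018]; the function names, tensor layout, and regularisation weight are illustrative assumptions, not the authors' implementation.

```python
import torch

def complex_score(subj: torch.Tensor, rel: torch.Tensor, obj: torch.Tensor) -> torch.Tensor:
    """Score (subject, relation, object) triples with ComplEx.

    Each argument stacks the real and imaginary parts of a complex
    embedding along the last dimension, shape (batch, 2 * rank).
    """
    rank = subj.shape[-1] // 2
    s_re, s_im = subj[..., :rank], subj[..., rank:]
    r_re, r_im = rel[..., :rank], rel[..., rank:]
    o_re, o_im = obj[..., :rank], obj[..., rank:]
    # Re(<s, r, conj(o)>), summed over the rank dimension.
    return ((s_re * r_re - s_im * r_im) * o_re
            + (s_re * r_im + s_im * r_re) * o_im).sum(dim=-1)

def n3_regulariser(*factors: torch.Tensor, weight: float = 1e-2) -> torch.Tensor:
    """Nuclear 3-norm penalty on the complex moduli of the embedding factors."""
    penalty = sum(
        (torch.sqrt(f[..., : f.shape[-1] // 2] ** 2
                    + f[..., f.shape[-1] // 2:] ** 2) ** 3).sum()
        for f in factors
    )
    # Averaging over the batch and the weight value are assumptions here.
    return weight * penalty / factors[0].shape[0]
```

In a training step one would typically add `n3_regulariser(e_s, e_r, e_o)` to the link-prediction loss computed from `complex_score`.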
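The Experiment Setup row reports Adagrad with learning rate 0.1 for 50,000 steps, with the loss chosen between 1-vs-all [Lacroix et al., 2018] and binary cross-entropy with one negative sample, and the beam size k tuned over {512, 1024, ..., 8192}. Below is a minimal, self-contained sketch of that optimisation configuration, assuming PyTorch; the toy bilinear scorer, the entity/relation counts, the batch size, and the random batches are hypothetical stand-ins for the authors' link predictor and data loader.

```python
import torch

num_entities, num_relations, rank, batch_size = 1_000, 50, 128, 256

entity_emb = torch.nn.Embedding(num_entities, rank)
relation_emb = torch.nn.Embedding(num_relations, rank)
params = list(entity_emb.parameters()) + list(relation_emb.parameters())

# Reported setup: Adagrad optimiser, learning rate 0.1, 50,000 training steps.
optimiser = torch.optim.Adagrad(params, lr=0.1)
# 1-vs-all loss: a softmax over every entity as a candidate answer.
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(50_000):
    # Hypothetical random batches standing in for (subject, relation, object) triples.
    s = torch.randint(num_entities, (batch_size,))
    r = torch.randint(num_relations, (batch_size,))
    o = torch.randint(num_entities, (batch_size,))
    # Toy bilinear scorer over all candidate objects, shape (batch, num_entities).
    scores = (entity_emb(s) * relation_emb(r)) @ entity_emb.weight.t()
    loss = loss_fn(scores, o)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# The beam-size hyper-parameter k applies at query-answering time and was
# tuned over {512, 1024, ..., 8192} according to the paper.
```

The binary cross-entropy alternative with one negative sample would replace the `CrossEntropyLoss` above with `torch.nn.BCEWithLogitsLoss` applied to one positive and one corrupted triple per example.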