Instance-based Learning for Knowledge Base Completion

Authors: Wanyun Cui, Xingran Chen

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on various tasks confirmed the IBL model's effectiveness and interpretability.
Researcher Affiliation | Academia | Shanghai University of Finance and Economics, University of Michigan
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | We release code at https://github.com/chenxran/InstanceBasedLearning
Open Datasets | Yes | Datasets: We select four typical KBC datasets for evaluation, including FB15k-237, WN18RR, Kinship, and UMLS. For Kinship and UMLS, we use the training/validation/test division in [17].
Dataset Splits | Yes | For Kinship and UMLS, we use the training/validation/test division in [17].
Hardware Specification | Yes | All experiments can be run on a single Nvidia Tesla V100 GPU.
Software Dependencies | No | The paper mentions training techniques and refers to previous work but does not provide specific software dependencies with version numbers.
Experiment Setup | No | The paper states 'We illustrate the hyper-parameter search process in the Appendix,' which defers specific experimental setup details to supplementary material not included in the main text.