Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Expanding Holographic Embeddings for Knowledge Completion

Authors: Yexiang Xue, Yang Yuan, Zhitian Xu, Ashish Sabharwal

NeurIPS 2018 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "For evaluation, we use the standard knowledge completion dataset FB15K [5]." "Table 1 summarizes our main results, with various methods sorted by increasing HITS@10 performance."
Researcher Affiliation | Collaboration | Yexiang Xue, Yang Yuan, Zhitian Xu, Ashish Sabharwal; Dept. of Computer Science, Purdue University, West Lafayette, IN, USA; Dept. of Computer Science, Cornell University, Ithaca, NY, USA; Allen Institute for Artificial Intelligence (AI2), Seattle, WA, USA
Pseudocode | No | The paper describes computational operations and formulas but does not provide any pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions reimplementing HOLE using the framework of Shi and Weninger [23] and refers to their updated code at https://github.com/bxshi/ProjE, but does not provide a link to a HOLEX-specific implementation or modified code.
Open Datasets | Yes | "For evaluation, we use the standard knowledge completion dataset FB15K [5]."
Dataset Splits | Yes | "The facts are divided into 483,142 for training, 50,000 for validation, and 59,071 for testing."
Hardware Specification | No | The paper mentions running 'on a 32-CPU machine on Google Cloud Platform' but does not provide specifics such as CPU model, GPU model, or memory.
Software Dependencies | No | The paper mentions using 'TensorFlow' as a framework but does not specify version numbers for TensorFlow or other software dependencies.
Experiment Setup | Yes | "We use their updated code from https://github.com/bxshi/ProjE and new suggested parameters, reported here for completeness: max 50 iterations, learning rate 0.0005, and negative sampling weight 0.1. We increased the embedding dimension from 200 to 256 for consistency with our method and reduced batch size to 128."
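The hyperparameters quoted in the Experiment Setup row can be collected into one place for reference. This is a minimal sketch: the dictionary name and keys below are illustrative assumptions, not identifiers from the ProjE codebase or the paper.

```python
# Training configuration reported in the paper's experiment setup.
# Names/keys here are illustrative, not taken from the released code.
HYPERPARAMS = {
    "max_iterations": 50,
    "learning_rate": 0.0005,
    "negative_sampling_weight": 0.1,
    "embedding_dim": 256,  # increased from 200 for consistency with HOLEX
    "batch_size": 128,     # reduced from the ProjE default
}

# FB15K split sizes as reported; the total is just their sum.
SPLITS = {"train": 483_142, "valid": 50_000, "test": 59_071}
TOTAL_FACTS = sum(SPLITS.values())
print(TOTAL_FACTS)  # 592213
```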