On the Representation and Embedding of Knowledge Bases beyond Binary Relations

Authors: Jianfeng Wen, Jianxin Li, Yongyi Mao, Shini Chen, Richong Zhang

IJCAI 2016

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. "We demonstrate experimentally that m-TransH outperforms TransH by a large margin, thereby establishing a new state of the art." (Section 4, Experiments)
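For context on the comparison above: m-TransH generalizes the TransH model (Wang et al., 2014) from binary triples to multi-fold relations. A minimal sketch of the standard TransH scoring function, which the paper's baselines use, could look like the following (variable names are illustrative):

```python
import numpy as np

def transh_score(h, t, w_r, d_r):
    """TransH plausibility score for a triple (h, r, t): project the head
    and tail embeddings onto the relation-specific hyperplane with unit
    normal w_r, then measure how well the translation vector d_r carries
    the projected head to the projected tail (lower = more plausible)."""
    w_r = w_r / np.linalg.norm(w_r)        # ensure the normal is unit length
    h_perp = h - (w_r @ h) * w_r           # projection of head onto hyperplane
    t_perp = t - (w_r @ t) * w_r           # projection of tail onto hyperplane
    return np.linalg.norm(h_perp + d_r - t_perp)
```

The relation-specific hyperplane is what lets TransH model many-to-one and one-to-many relations better than plain TransE; m-TransH extends the same projection idea to facts with more than two arguments.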
Researcher Affiliation: Academia. (1) State Key Laboratory of Software Development Environment, Beihang University; (2) School of Computer Science and Engineering, Beihang University; (3) School of Electrical Engineering and Computer Science, University of Ottawa.
Pseudocode: No. The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code: No. The paper states: "To inspire further research on the embedding of multi-fold relations, we have made our JF17K datasets publicly available: http://www.site.uottawa.ca/~yymao/JF17K". This link provides access to the datasets, not the source code for the methodology described in the paper.
Open Datasets: Yes. "To inspire further research on the embedding of multi-fold relations, we have made our JF17K datasets publicly available: http://www.site.uottawa.ca/~yymao/JF17K." The dataset G_id was randomly split into a training set and a testing set, where every fact-ID entity in the testing set was assured to appear in the training set.
Dataset Splits: No. The paper describes a training set and a testing set but does not explicitly state the use of a separate validation set or provide details on a validation split.
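The split constraint quoted above (every entity appearing in the test set must also appear in the training set) is a standard filtering step for embedding benchmarks, since embeddings cannot be learned for unseen entities. A minimal sketch of such a split, under the assumption that facts are tuples of entity IDs (the function name and parameters are illustrative, not from the paper):

```python
import random

def split_facts(facts, test_frac=0.2, seed=0):
    """Randomly split facts into train/test, then move any candidate test
    fact containing an entity unseen in training back into the training
    set, so every test-set entity appears at least once in training."""
    rng = random.Random(seed)
    facts = facts[:]
    rng.shuffle(facts)
    cut = int(len(facts) * (1 - test_frac))
    train, candidates = facts[:cut], facts[cut:]
    seen = {e for f in train for e in f}   # entities covered by training
    test = []
    for f in candidates:
        if all(e in seen for e in f):
            test.append(f)
        else:
            train.append(f)                # fact has an unseen entity:
            seen.update(f)                 # keep it in training instead
    return train, test
```

Note the resulting test fraction can end up slightly below `test_frac`, since some candidate facts are pushed back into training.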
Hardware Specification: Yes. "For example, at DIM=50, the training/testing times (in minutes) for TransH:triple and m-TransH:ID are respectively 105/229 and 52/135, on a 32-core Intel E5-2650 2.0GHz processor."
Software Dependencies: No. The paper does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks with their versions). It only mentions Stochastic Gradient Descent as a training method.
Experiment Setup: Yes. "Several choices of the dimension (DIM) of U are studied. In TransH:triple and TransH:inst, for each triple in the training set, one random negative example is generated. In m-TransH and m-TransH:ID, for each instance in the training set, random negative examples are generated. This way, the total number of negative examples used in every experiment is the same, assuring a fair comparison. Stochastic Gradient Descent is used for training, as is standard."
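The negative-example generation described above can be sketched with a common corruption scheme for translation-based embeddings: replace one randomly chosen argument of a fact with a random entity. This is an illustrative sketch, not the paper's exact procedure:

```python
import random

def corrupt(fact, entities, rng):
    """Produce one negative example from a (possibly multi-fold) fact by
    replacing a single randomly chosen position with a different random
    entity drawn from the entity vocabulary."""
    fact = list(fact)
    pos = rng.randrange(len(fact))         # which argument slot to corrupt
    replacement = rng.choice(entities)
    while replacement == fact[pos]:        # ensure the fact actually changes
        replacement = rng.choice(entities)
    fact[pos] = replacement
    return tuple(fact)
```

Holding the total count of such corrupted facts equal across models, as the quoted setup does, keeps the amount of negative supervision comparable between the triple-based and multi-fold variants.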