Knowledge Transfer for Out-of-Knowledge-Base Entities: A Graph Neural Network Approach

Authors: Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, Yuji Matsumoto

IJCAI 2017

Reproducibility Variable Result LLM Response
Research Type Experimental The experimental results show the effectiveness of our proposed model in the OOKB setting. Additionally, in the standard KBC setting in which OOKB entities are not involved, our model achieves state-of-the-art performance on the WordNet dataset. (Section 4, Experiments)
Researcher Affiliation Collaboration (1) Nara Institute of Science and Technology, Ikoma, Nara, Japan; (2) Recruit Institute of Technology
Pseudocode No The paper provides equations describing the propagation model (Eqs. 1-6) but does not include a clearly labeled pseudocode or algorithm block.
Open Source Code Yes The code and dataset are available at https://github.com/takuo-h/GNN-for-OOKB.
Open Datasets Yes We used WordNet11 and Freebase13 [Socher et al., 2013] for evaluation. The data files were downloaded from http://cs.stanford.edu/people/danqi/. These datasets are subsets of two popular knowledge graphs, WordNet [Miller, 1995] and Freebase [Bollacker et al., 2008].
Dataset Splits Yes Table 1: Specifications of the triplet classification datasets. ... Training triplets 112,581 / 316,232; Validation triplets 5,218 / 11,816; Test triplets 21,088 / 47,466 ... For the validation triplets, we simply removed the triplets containing OOKB entities from the WordNet11 validation set.
Hardware Specification No The paper mentions 'To deal with the limited available computational resources (e.g., GPU memory)', but does not specify any particular GPU model, CPU, or other hardware components used for experiments.
Software Dependencies No We implemented our models using the neural network library Chainer (http://chainer.org/). The paper mentions Chainer but does not specify its version number or any other software dependencies with versions.
Experiment Setup Yes The step size of Adam was α1/(α2·k + 1.0), where k indicates the number of epochs performed, α1 = 0.01, and α2 = 0.0001. The mini-batch size was 5,000 and the number of training epochs was 300 in every experiment. Moreover, the dimension of the embedding space was 200 in the standard triplet classification and 100 in other settings. ... margin value τ = 300 in the absolute-margin objective function (Eq. (8)).
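The reported learning-rate schedule can be sketched as a small helper. This is a minimal illustration only; the function name is ours, and the constants are taken directly from the quoted setup (α1 = 0.01, α2 = 0.0001, k = epoch index):

```python
def adam_step_size(epoch: int, alpha1: float = 0.01, alpha2: float = 0.0001) -> float:
    """Step size of Adam at epoch k, as reported: alpha1 / (alpha2 * k + 1.0)."""
    return alpha1 / (alpha2 * epoch + 1.0)

# At epoch 0 the step size equals alpha1 = 0.01; after the 300 training
# epochs used in the paper it has decayed only mildly, to 0.01 / 1.03.
print(adam_step_size(0))
print(adam_step_size(300))
```

Note the schedule decays per epoch, not per mini-batch, so with 300 epochs the step size never drops below roughly 97% of its initial value.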