On Completing Sparse Knowledge Base with Transitive Relation Embedding
Authors: Zili Zhou, Shaowu Liu, Guandong Xu, Wu Zhang (pp. 3125–3132)
Venue: AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three public datasets against seven baselines showed the merits of TRE in terms of knowledge base completion accuracy as well as computational complexity. |
| Researcher Affiliation | Academia | Zili Zhou (1,2), Shaowu Liu (1), Guandong Xu (1,*), Wu Zhang (2); 1: Advanced Analytics Institute, University of Technology Sydney; 2: School of Computer Engineering and Science, Shanghai University. Zili.Zhou@student.uts.edu.au, Shaowu.Liu@uts.edu.au, Guandong.Xu@uts.edu.au, wzhang@shu.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access information for the source code of the described method (e.g., a repository link or an explicit code-release statement). |
| Open Datasets | Yes | We test the performance of these methods on several widely used KB datasets, including FB15K and WN18. We also construct an extremely sparse dataset by extracting a subset from the entire DBpedia project; we call this dataset DBP in the experiments. |
| Dataset Splits | No | The paper mentions a 'training dataset' and a 'testing set' but does not give specific percentages, sample counts, or a split methodology for training, validation, or testing (see the loading sketch after the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python 3.8, CPLEX 12.4). |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or explicit training configurations. |
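Since the paper names FB15K and WN18 but gives no split details (see the Dataset Splits row above), the following is a minimal sketch of how these benchmarks are conventionally distributed and loaded: tab-separated triple files with fixed train/valid/test partitions. The `FB15K` directory name and the `train.txt`/`valid.txt`/`test.txt` file names are assumptions based on the common distribution format, not taken from the paper.

```python
from pathlib import Path

def load_triples(path):
    """Read one (head, relation, tail) triple per tab-separated line."""
    triples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) == 3:
                triples.append(tuple(parts))
    return triples

# Assumed directory layout; the benchmark is commonly shipped as three files.
data_dir = Path("FB15K")
splits = {name: load_triples(data_dir / f"{name}.txt")
          for name in ("train", "valid", "test")}

entities = {e for ts in splits.values() for h, _, t in ts for e in (h, t)}
relations = {r for ts in splits.values() for _, r, _ in ts}

for name, ts in splits.items():
    print(f"{name}: {len(ts)} triples")
print(f"entities: {len(entities)}, relations: {len(relations)}")
```

Printing the per-split triple counts this way is also a quick check that a reproduction uses the same partition sizes as the original release.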
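The Research Type row above reports "knowledge base completion accuracy" without naming metrics. On FB15K and WN18, completion quality is conventionally measured by Mean Rank and Hits@k over ranked candidate entities; whether the paper uses exactly these metrics is not confirmed by the table, and the ranks below are hypothetical values for illustration.

```python
def hits_at_k(ranks, k=10):
    """Fraction of test queries whose correct entity ranks in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mean_rank(ranks):
    """Average rank of the correct entity over all test queries."""
    return sum(ranks) / len(ranks)

# Hypothetical ranks of the correct entity for five test triples.
ranks = [1, 4, 12, 3, 57]
print(f"Hits@10  = {hits_at_k(ranks, 10):.2f}")  # 0.60
print(f"MeanRank = {mean_rank(ranks):.1f}")      # 15.4
```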