Locally Adaptive Translation for Knowledge Graph Embedding
Authors: Yantao Jia, Yuanzhuo Wang, Hailun Lin, Xiaolong Jin, Xueqi Cheng
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on two benchmark data sets demonstrate the superiority of the proposed method, as compared to the state-of-the-art ones. We will conduct experiments on two tasks: link prediction (Bordes et al. 2013) and triple classification (Wang et al. 2014). (A sketch of the standard link-prediction protocol follows the table.) |
| Researcher Affiliation | Academia | (1) CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; (2) Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described. |
| Open Datasets | Yes | The data sets we use are publicly available from two widely used knowledge graphs, WordNet (Miller 1995) and Freebase (Bollacker et al. 2008). For the data sets from WordNet, we employ WN18 used in (Bordes et al. 2014) and WN11 used in (Socher et al. 2013). For the data sets from Freebase, we employ FB15K, also used in (Bordes et al. 2014), and FB13 used in (Socher et al. 2013). |
| Dataset Splits | Yes | The statistics of these data sets are listed in Table 2. All parameters are determined on the validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | The learning rate λ for SGD is selected from {0.1, 0.01, 0.001}, the embedding dimension d from {20, 50, 100}, the batch size B from {20, 120, 480, 1440, 4800}, and the parameter μ of Equation (3) from [0, 1]. The optimal settings are λ = 0.001, d = 100, B = 1440, μ = 0.5 with the L1 dissimilarity on WN18, and λ = 0.001, d = 50, B = 4800, μ = 0.5 with the L1 dissimilarity on FB15K. (A hedged training-step sketch using these settings follows the table.) |
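The Experiment Setup row gives enough to reconstruct the training step at a high level. Below is a minimal NumPy sketch of a margin-based SGD update under the reported WN18 settings. The array layout, the `sgd_step` helper, and the fixed `margin` argument are illustrative assumptions; in particular, the paper's locally adaptive margin (its Equation (3), weighted by μ) is only gestured at in the comments, not implemented.

```python
import numpy as np

# Reported optimal hyperparameters for WN18; FB15K differs only in
# d = 50 and B = 4800. All values were selected on the validation set.
LEARNING_RATE = 0.001   # lambda, selected from {0.1, 0.01, 0.001}
EMBED_DIM     = 100     # d, selected from {20, 50, 100}
BATCH_SIZE    = 1440    # B, selected from {20, 120, 480, 1440, 4800}
MU            = 0.5     # mu weighting the paper's Equation (3), in [0, 1]

def l1_dissimilarity(h, r, t):
    """L1 dissimilarity ||h + r - t||_1 of a (head, relation, tail) triple."""
    return np.abs(h + r - t).sum(axis=-1)

def sgd_step(ent_emb, rel_emb, pos, neg, margin):
    """One margin-based ranking update on a (positive, corrupted) pair.

    `margin` is passed in as a plain number here; the paper instead
    derives a locally adaptive margin per triple (its Equation (3),
    weighted by MU), which this sketch does not reproduce.
    """
    (h, r, t), (h2, r2, t2) = pos, neg
    loss = (margin
            + l1_dissimilarity(ent_emb[h], rel_emb[r], ent_emb[t])
            - l1_dissimilarity(ent_emb[h2], rel_emb[r2], ent_emb[t2]))
    if loss > 0:
        # Subgradient of the L1 terms: the sign of each translation residual.
        g_pos = np.sign(ent_emb[h] + rel_emb[r] - ent_emb[t])
        g_neg = np.sign(ent_emb[h2] + rel_emb[r2] - ent_emb[t2])
        ent_emb[h]  -= LEARNING_RATE * g_pos
        rel_emb[r]  -= LEARNING_RATE * g_pos
        ent_emb[t]  += LEARNING_RATE * g_pos
        ent_emb[h2] += LEARNING_RATE * g_neg
        rel_emb[r2] += LEARNING_RATE * g_neg
        ent_emb[t2] -= LEARNING_RATE * g_neg
    return float(max(loss, 0.0))
```

As in TransE-style training (Bordes et al. 2013), corrupted triples would replace either the head or the tail of a training triple, and one epoch would iterate over mini-batches of size B.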
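The link-prediction task named in the Research Type row follows the standard protocol from Bordes et al. (2013): score every candidate entity against each test triple and report the rank of the correct one. The sketch below implements the raw (unfiltered) setting for tail prediction; the function name and the reuse of the L1 scoring rule from the sketch above are assumptions, not code from the paper.

```python
import numpy as np

def link_prediction_ranks(ent_emb, rel_emb, test_triples):
    """Raw-setting tail prediction: rank the true tail among all entities.

    For each test triple (h, r, t), every entity is scored as a candidate
    tail by L1 dissimilarity and the rank of the true t is recorded. Head
    prediction is symmetric and omitted for brevity.
    """
    ranks = []
    for h, r, t in test_triples:
        scores = np.abs(ent_emb[h] + rel_emb[r] - ent_emb).sum(axis=-1)
        # Candidates with strictly lower dissimilarity outrank the true tail.
        ranks.append(int((scores < scores[t]).sum()) + 1)
    mean_rank = sum(ranks) / len(ranks)
    hits_at_10 = sum(rank <= 10 for rank in ranks) / len(ranks)
    return mean_rank, hits_at_10
```

The filtered setting additionally discards candidates that already form a known true triple in the training, validation, or test set before computing each rank.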