A Structural Representation Learning for Multi-relational Networks
Authors: Lin Liu, Xin Li, William K. Cheung, Chengcheng Xu
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on real multi-relational network datasets of WordNet and Freebase demonstrate the efficacy of the proposed model when compared with the state-of-the-art embedding methods. |
| Researcher Affiliation | Academia | Lin Liu (1), Xin Li (1), William K. Cheung (2), and Chengcheng Xu (1). (1) BJ ER Center of HVLIP&CC, School of Comp. Sci., Beijing Institute of Technology, Beijing, China; (2) Department of Computer Science, Hong Kong Baptist University, Hong Kong, China |
| Pseudocode | No | The paper describes the model and inference mathematically, but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not mention providing open-source code for the methodology described. |
| Open Datasets | Yes | To evaluate the performance of the proposed multi-relational network embedding (MNE), we employ two well-known benchmark datasets, namely, WN18 and FB15K, which are extracted from the real-world multi-relational networks WordNet [Miller, 1995] and Freebase [Bollacker et al., 2008], respectively. |
| Dataset Splits | No | The paper states: 'The experiments are evaluated using 80/20 rule for the train-test split.' It does not provide specific details on a validation split or cross-validation. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | No | The paper mentions using logistic regression and negative sampling for optimization, and notes that 'unif and bern to sample negative instances are used for the embedding learning' and that 'K is the number of the negative samples', but it does not specify concrete hyperparameter values such as the learning rate, batch size, or the value of K. |
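
For context on the last row: the paper names the 'unif' and 'bern' negative-sampling strategies without restating how they work. The sketch below is an illustration, not the paper's implementation: function names are invented here, and the per-relation head-corruption probability tph / (tph + hpt) is the standard 'bern' heuristic from Wang et al. (2014), which this paper cites implicitly by using the strategy names.

```python
import random
from collections import defaultdict

def bern_probs(triples):
    # Per-relation probability of replacing the HEAD of a triple, using the
    # 'bern' heuristic tph / (tph + hpt) from Wang et al. (2014). tph is the
    # average number of tails per head; hpt the average heads per tail.
    count = defaultdict(int)
    heads = defaultdict(set)
    tails = defaultdict(set)
    for h, r, t in triples:
        count[r] += 1
        heads[r].add(h)
        tails[r].add(t)
    probs = {}
    for r in count:
        tph = count[r] / len(heads[r])  # avg tails per head
        hpt = count[r] / len(tails[r])  # avg heads per tail
        probs[r] = tph / (tph + hpt)
    return probs

def sample_negatives(triple, entities, K, strategy="unif", head_prob=None):
    # Draw K corrupted triples for one observed (h, r, t). 'unif' corrupts
    # the head or the tail with equal probability; 'bern' uses the
    # per-relation bias so that the side with more variety is corrupted
    # more often, reducing false negatives.
    h, r, t = triple
    negatives = []
    for _ in range(K):
        p = head_prob[r] if strategy == "bern" else 0.5
        if random.random() < p:
            negatives.append((random.choice(entities), r, t))  # corrupt head
        else:
            negatives.append((h, r, random.choice(entities)))  # corrupt tail
    return negatives
```

With this heuristic, a one-to-many relation (high tph) mostly gets its head corrupted, since a randomly swapped tail is more likely to yield a triple that is actually true.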