Robust Embedding with Multi-Level Structures for Link Prediction

Authors: Zihan Wang, Zhaochun Ren, Chunyu He, Peng Zhang, Yue Hu

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Results on WN18 and FB15k datasets show that our approach is effective in the standard link prediction task, significantly and consistently outperforming competitive baselines. Furthermore, robustness analysis on the FB15k-237 dataset demonstrates that our proposed M-GNN is highly robust to sparsity and noise.
Researcher Affiliation | Academia | 1. Institute of Information Engineering, Chinese Academy of Sciences; 2. School of Cyber Security, University of Chinese Academy of Sciences; 3. Shandong University
Pseudocode | Yes | Algorithm 1 (Graph Coarsening). Require: knowledge graph KG = (V, E, R); Ensure: coarsened graphs G0, G1, ..., Gk. 1: m ← 0; 2: G0 ← KG; 3: while |Em| > threshold do; 4: m ← m + 1; 5: Gm ← Edge_Coarsen(Neighbor_Coarsen(Gm-1)); 6: end while; 7: return G0, G1, ..., Gk. (A runnable sketch of this loop appears after the table.)
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We evaluate our link prediction algorithm on two commonly used datasets: FB15k, a subset of the multi-label knowledge base Freebase, and WN18, a subset of WordNet featuring lexical relations between words. Both datasets are released by [Bordes et al., 2013]. ... [Toutanova and Chen, 2015] release the dataset FB15k-237, removing all inverse triplet pairs. Thus, we use the FB15k-237 dataset for our extensive experiments.
Dataset Splits | Yes | Dataset statistics (# Ent / # Rel / # Train / # Valid / # Test): WN18: 40,943 / 18 / 141,442 / 5,000 / 5,000. FB15k: 14,951 / 1,345 / 483,142 / 50,000 / 59,071. FB15k-237: 14,541 / 237 / 272,115 / 17,535 / 20,466. The hyperparameters in M-GNN are determined by the grid search on the validation set. (A minimal split loader appears after the table.)
Hardware Specification | No | The paper mentions training models but does not specify any hardware details such as CPU/GPU models, memory, or specific cloud computing resources used for the experiments.
Software Dependencies | No | The paper mentions 'We train the models with Adam optimizer [Kingma and Ba, 2015]' but does not provide specific version numbers for any software libraries, frameworks, or environments used (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | The hyperparameters in M-GNN are determined by the grid search on the validation set. The ranges of the hyperparameters are manually set as follows: learning rate {0.01, 0.005, 0.003, 0.001}, dropout rate {0, 0.1, 0.2, 0.3, ..., 0.9}, embedding size {100, 150, 200, 300}, regularization coefficient {0.01, 0.05, 0.1, 0.5, 1.0}, number of negative samples {1, 3, 5, 10}, and ϵ = 0. For both FB15k and WN18 datasets, we use M-GNN with three GNN layers, and all MLPs have two layers with hidden unit number in {10, 50, 100, 200}. For the ComplEx encoder, we treat a complex vector in C^d as a real vector in R^{2d} in the encoder. We train the models with the Adam optimizer [Kingma and Ba, 2015]. (A grid-search sketch appears after the table.)
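
For readers who want to experiment with Algorithm 1, below is a minimal Python sketch of the coarsening loop. The paper releases no code, so the neighbor_coarsen and edge_coarsen heuristics here are simplified placeholders (degree-1 merging and greedy edge contraction), not the authors' operators; only the outer while-loop structure follows the pseudocode.

    from collections import defaultdict

    def neighbor_coarsen(triples):
        # Placeholder heuristic: merge each degree-1 entity into its only neighbor.
        neighbors = defaultdict(set)
        for h, r, t in triples:
            neighbors[h].add(t)
            neighbors[t].add(h)
        merge = {e: next(iter(n)) for e, n in neighbors.items() if len(n) == 1}
        return {(merge.get(h, h), r, merge.get(t, t))
                for h, r, t in triples
                if merge.get(h, h) != merge.get(t, t)}

    def edge_coarsen(triples):
        # Placeholder heuristic: contract a greedy matching of edges.
        matched, merge = set(), {}
        for h, r, t in triples:
            if h != t and h not in matched and t not in matched:
                matched.update((h, t))
                merge[t] = h
        return {(merge.get(h, h), r, merge.get(t, t))
                for h, r, t in triples
                if merge.get(h, h) != merge.get(t, t)}

    def graph_coarsen(kg_triples, threshold):
        # Outer loop of Algorithm 1: coarsen until the edge set is small enough,
        # returning the whole sequence G0, G1, ..., Gk.
        graphs = [set(kg_triples)]
        while len(graphs[-1]) > threshold:
            nxt = edge_coarsen(neighbor_coarsen(graphs[-1]))
            if len(nxt) >= len(graphs[-1]):  # guard against a fixed point
                break
            graphs.append(nxt)
        return graphs

    # Toy usage: edge counts per coarsening level.
    toy = {("a", "r1", "b"), ("b", "r1", "c"), ("c", "r2", "d"), ("d", "r2", "e")}
    print([len(g) for g in graph_coarsen(toy, threshold=2)])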
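
The split sizes above can be sanity-checked when reproducing the experiments. Here is a minimal loader, assuming the standard tab-separated triple files (one head, relation, tail per line) used by the public FB15k/WN18/FB15k-237 releases; the file names below follow the common convention, not something the paper specifies.

    def load_triples(path):
        # One tab-separated (head, relation, tail) triple per line.
        with open(path, encoding="utf-8") as f:
            return [tuple(line.rstrip("\n").split("\t")) for line in f if line.strip()]

    # Expected counts for FB15k-237 (from the table above):
    # len(load_triples("FB15k-237/train.txt")) == 272115
    # len(load_triples("FB15k-237/valid.txt")) == 17535
    # len(load_triples("FB15k-237/test.txt"))  == 20466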
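
Finally, a minimal sketch of the reported validation grid search. The grid values are the ones listed in the Experiment Setup row; train_and_validate is a hypothetical stand-in for training M-GNN and scoring it on the validation split (e.g., by MRR), since no training code is released.

    import itertools

    GRID = {
        "learning_rate": [0.01, 0.005, 0.003, 0.001],
        "dropout": [round(0.1 * i, 1) for i in range(10)],  # 0.0, 0.1, ..., 0.9
        "embedding_size": [100, 150, 200, 300],
        "reg_coeff": [0.01, 0.05, 0.1, 0.5, 1.0],
        "num_negatives": [1, 3, 5, 10],
    }

    def grid_search(train_and_validate):
        # Exhaustively train one model per configuration and keep the best
        # validation score, mirroring "grid search on the validation set".
        best_score, best_config = float("-inf"), None
        keys = list(GRID)
        for values in itertools.product(*(GRID[k] for k in keys)):
            config = dict(zip(keys, values))
            score = train_and_validate(config)
            if score > best_score:
                best_score, best_config = score, config
        return best_config, best_score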