ER: Equivariance Regularizer for Knowledge Graph Completion

Authors: Zongsheng Cao, Qianqian Xu, Zhiyong Yang, Qingming Huang

AAAI 2022, pp. 5512-5520 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we first introduce the experimental settings and show main results. Then we conduct some ablation experiments.
Researcher Affiliation | Academia | Zongsheng Cao 1,2, Qianqian Xu 3,*, Zhiyong Yang 4, Qingming Huang 3,4,5,6,* 1 State Key Laboratory of Information Security, Institute of Information Engineering, CAS, Beijing, China 2 School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China 3 Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, CAS, Beijing, China 4 School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China 5 Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing, China 6 Peng Cheng Laboratory, Shenzhen, China caozongsheng@iie.ac.cn, xuqianqian@ict.ac.cn, yangzhiyong21@ucas.ac.cn, qmhuang@ucas.ac.cn
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Appendix (footnote 1: https://github.com/Lion-ZS/ER)
Open Datasets | Yes | We conduct experiments on three widely used benchmarks, WN18RR (Dettmers et al. 2017), FB15K-237 (Dettmers et al. 2017) and YAGO3-10 (Mahdisoltani, Biega, and Suchanek 2013)
Dataset Splits | Yes | We take Adagrad (Duchi, Hazan, and Singer 2011) as the optimizer in the experiment, and use grid search based on the performance of the validation datasets to choose the best hyperparameters.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory specifications used for experiments.
Software Dependencies | No | The paper mentions 'Adagrad' as an optimizer but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | Specifically, we search learning rates in {0.5, 0.1, 0.05, 0.01, 0.005, 0.001}, and search regularization coefficients in {0.001, 0.005, 0.01, 0.05, 0.1, 0.5}. All models are trained for a maximum of 200 epochs.
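The Dataset Splits and Experiment Setup rows above fully determine the hyperparameter search: Adagrad as the optimizer, grid search over six learning rates and six regularization coefficients, at most 200 epochs per run, and selection by validation performance. Below is a minimal Python sketch of that outer loop; the grid values and the 200-epoch cap are quoted from the paper, while the train_and_evaluate helper, its interface, and the use of validation MRR as the selection metric are assumptions, since the paper does not specify them.

from itertools import product

# Grid values quoted from the paper; everything else in this sketch is an assumption.
LEARNING_RATES = [0.5, 0.1, 0.05, 0.01, 0.005, 0.001]
REG_COEFFICIENTS = [0.001, 0.005, 0.01, 0.05, 0.1, 0.5]
MAX_EPOCHS = 200  # "All models are trained for a maximum of 200 epochs."

def train_and_evaluate(lr, reg_coef, max_epochs):
    # Hypothetical stand-in: the real version would train the ER-regularized model
    # with Adagrad (e.g. torch.optim.Adagrad) for up to max_epochs epochs and return
    # a validation metric such as MRR. A constant keeps the sketch runnable.
    return 0.0

best_score, best_config = float("-inf"), None
for lr, reg_coef in product(LEARNING_RATES, REG_COEFFICIENTS):
    score = train_and_evaluate(lr, reg_coef, MAX_EPOCHS)
    if score > best_score:
        best_score, best_config = score, (lr, reg_coef)
print("best (lr, reg):", best_config)

The grid amounts to 36 (learning rate, coefficient) combinations at up to 200 epochs each; since the paper reports no hardware, the wall-clock cost of this search cannot be estimated from the text alone.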