HousE: Knowledge Graph Embedding with Householder Parameterization

Authors: Rui Li, Jianan Zhao, Chaozhuo Li, Di He, Yiqi Wang, Yuming Liu, Hao Sun, Senzhang Wang, Weiwei Deng, Yanming Shen, Xing Xie, Qi Zhang

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, HousE achieves new state-of-the-art performance on five benchmark datasets. Our code is available at https://github.com/anrep/HousE. We conduct extensive experiments over five benchmarks and our proposal consistently outperforms SOTA baselines over all the datasets."
Researcher Affiliation | Collaboration | "1 Department of Computer Science and Technology, Dalian University of Technology, Dalian, China; 2 University of Notre Dame, Indiana, USA; 3 Microsoft Research Asia, Beijing, China; 4 Peking University, Beijing, China; 5 Michigan State University, Michigan, USA; 6 Microsoft, Beijing, China; 7 Central South University, Changsha, China."
Pseudocode | Yes | "Algorithm 1: Forward procedure of HousE"
Open Source Code | Yes | "Our code is available at https://github.com/anrep/HousE."
Open Datasets | Yes | "Datasets. We evaluate our proposals on five widely-used benchmarks: WN18 (Bordes et al., 2013), FB15k (Bordes et al., 2013), WN18RR (Dettmers et al., 2018), FB15k-237 (Toutanova & Chen, 2015) and YAGO3-10 (Mahdisoltani et al., 2015). Refer to Appendix F for more details."
Dataset Splits | Yes | "Table 8. Statistics of five standard benchmarks:"
Dataset   | #entity | #relation | #training | #validation | #test
WN18      | 40,943  | 18        | 141,442   | 5,000       | 5,000
FB15k     | 14,951  | 1,345     | 483,142   | 50,000      | 59,071
WN18RR    | 40,943  | 11        | 86,835    | 3,034       | 3,134
FB15k-237 | 14,541  | 237       | 272,115   | 17,535      | 20,466
YAGO3-10  | 123,182 | 37        | 1,079,040 | 5,000       | 5,000
Hardware Specification | No | The paper reports training times in Table 10 but provides no hardware details such as GPU or CPU models, memory, or cloud instance types used for the experiments.
Software Dependencies | No | The paper mentions using "Adam (Kingma & Ba, 2015) as the optimizer" and "random search (Bergstra & Bengio, 2012)" but does not specify version numbers for these or for other software components such as programming languages or libraries.
Experiment Setup | Yes | "The hyperparameters are tuned by random search (Bergstra & Bengio, 2012), including batch size b, self-adversarial sampling temperature α, fixed margin γ, learning rate lr, rotation dimension k, number of modified Householder reflections m for Householder projections, and regularization coefficient λ. The hyper-parameter search space is shown in Table 11."
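For context on the core operation the paper parameterizes, a Householder reflection maps x to x - 2 (v·x / v·v) v, and composing an even number of such reflections yields a rotation. The NumPy sketch below illustrates this basic construction only; it is not the paper's Algorithm 1 (HousE additionally uses modified reflections for projections), and the function names are illustrative.

```python
import numpy as np

def householder_reflect(x, v):
    """Reflect x across the hyperplane orthogonal to v.
    The map H(v) x = x - 2 (v.x / v.v) v is orthogonal with determinant -1."""
    return x - 2.0 * (v @ x) / (v @ v) * v

def householder_rotate(x, vs):
    """Apply a sequence of Householder reflections to x.
    An even number of reflections composes to a rotation (orthogonal, det +1),
    which is the kind of relation transformation HousE parameterizes."""
    for v in vs:
        x = householder_reflect(x, v)
    return x

rng = np.random.default_rng(0)
x = rng.normal(size=4)
vs = rng.normal(size=(2, 4))  # two reflections -> one rotation
y = householder_rotate(x, vs)

# Each reflection is orthogonal, so the composed map preserves norms
print(np.allclose(np.linalg.norm(y), np.linalg.norm(x)))  # True
```

Because each reflection is norm-preserving by construction, a chain of reflection vectors gives a rotation parameterization with no explicit orthogonality constraint to enforce during training, which is the appeal of the Householder approach.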