Quaternion Ordinal Embedding

Authors: Wenzheng Hou, Qianqian Xu, Ke Ma, Qianxiu Hao, Qingming Huang

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | In this section, we conduct extensive experiments to verify the effectiveness of our proposed method on one synthetic dataset and three real-world datasets. |
| Researcher Affiliation | Academia | Wenzheng Hou (1,2), Qianqian Xu (1), Ke Ma (2), Qianxiu Hao (1,2) and Qingming Huang (1,2,3,4). Affiliations: (1) Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, CAS; (2) School of Computer Science and Technology, University of Chinese Academy of Sciences; (3) Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences; (4) Artificial Intelligence Research Center, Peng Cheng Laboratory. Contact: houwenzheng20@mails.ucas.ac.cn, xuqianqian@ict.ac.cn, make@ucas.ac.cn, haoqianxiu19@mails.ucas.ac.cn, qmhuang@ucas.ac.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | No | We show a detailed description of the four datasets in Appendix B. |
| Dataset Splits | No | We adopt testing error as our evaluation metric, which is defined as the ratio of the wrongly predicted triples in the test set. (A sketch of this metric appears below the table.) |
| Hardware Specification | Yes | All experiments are conducted on Ubuntu 16.04.6 LTS, with an NVIDIA TITAN RTX. |
| Software Dependencies | Yes | The algorithm is written in Python 3.6.8 and uses the TensorFlow 1.14 deep learning framework. |
| Experiment Setup | Yes | We apply random initialization for the embedding and choose Adam [Kingma and Ba, 2015] as the optimizer. The batch size is set to 512, the learning rate λ = 0.1, and the number of epochs is set to 200 for all methods. (A training-loop sketch using these settings appears below the table.) |
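The testing-error metric quoted in the Dataset Splits row (the ratio of wrongly predicted triples in the test set) can be made concrete with a short sketch. This is a minimal illustration, assuming each test triple (i, j, k) encodes "item i is closer to item j than to item k" in the learned embedding; the function and array names are hypothetical and not taken from the paper.

```python
import numpy as np

def triplet_test_error(embedding, test_triplets):
    """Fraction of test triples (i, j, k) for which the learned embedding
    does NOT satisfy d(x_i, x_j) < d(x_i, x_k).

    embedding:     (n_items, dim) array of learned item coordinates.
    test_triplets: (n_triples, 3) integer array of held-out comparisons.
    """
    i, j, k = test_triplets[:, 0], test_triplets[:, 1], test_triplets[:, 2]
    d_ij = np.linalg.norm(embedding[i] - embedding[j], axis=1)
    d_ik = np.linalg.norm(embedding[i] - embedding[k], axis=1)
    wrong = d_ij >= d_ik  # comparison predicted incorrectly
    return wrong.mean()
```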
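The Experiment Setup row (random initialization, Adam, batch size 512, learning rate 0.1, 200 epochs, TensorFlow 1.14) translates into roughly the following TensorFlow 1.x training skeleton. This is a hedged sketch only: the hinge objective below is a generic triplet-embedding loss standing in for the paper's quaternion ordinal embedding objective, and the problem sizes, dummy data, and variable names are assumptions.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.14-style graph API

N_ITEMS, DIM = 1000, 10                 # assumed problem size, not from the paper
BATCH_SIZE, EPOCHS, LR = 512, 200, 0.1  # values reported in the paper

# Randomly initialized embedding, as stated in the setup.
X = tf.Variable(tf.random.normal([N_ITEMS, DIM], stddev=0.1), name="embedding")

# Mini-batch of triples (i, j, k): item i is closer to item j than to item k.
triplets = tf.placeholder(tf.int32, shape=[None, 3])
xi = tf.gather(X, triplets[:, 0])
xj = tf.gather(X, triplets[:, 1])
xk = tf.gather(X, triplets[:, 2])

# Placeholder objective: hinge on squared distances (NOT the paper's quaternion loss).
d_ij = tf.reduce_sum(tf.square(xi - xj), axis=1)
d_ik = tf.reduce_sum(tf.square(xi - xk), axis=1)
loss = tf.reduce_mean(tf.maximum(0.0, 1.0 + d_ij - d_ik))

train_op = tf.train.AdamOptimizer(learning_rate=LR).minimize(loss)

train_triplets = np.random.randint(0, N_ITEMS, size=(50000, 3))  # dummy data

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(EPOCHS):
        np.random.shuffle(train_triplets)
        for start in range(0, len(train_triplets), BATCH_SIZE):
            batch = train_triplets[start:start + BATCH_SIZE]
            sess.run(train_op, feed_dict={triplets: batch})
```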