KDDC: Knowledge-Driven Disentangled Causal Metric Learning for Pre-Travel Out-of-Town Recommendation

Authors: Yinghui Liu, Guojiang Shen, Chengyong Cui, Zhenzhen Zhao, Xiao Han, Jiaxin Du, Xiangyu Zhao, Xiangjie Kong

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on two real-world nationwide datasets display the consistent superiority of our KDDC over state-of-the-art baselines.
Researcher Affiliation | Academia | 1Zhejiang University of Technology, 2City University of Hong Kong. {LiuYingHui240, zhenzhenzhao97}@outlook.com, gjshen1975@zjut.edu.cn, {hahayunc3c3y25, hahahenha, jiaxin.joyce.du}@gmail.com, xianzhao@cityu.edu.hk, xjkong@ieee.org
Pseudocode | No | No section or figure is explicitly labeled "Pseudocode" or "Algorithm".
Open Source Code | Yes | The code for the implementation of KDDC is available for reproducibility (https://github.com/Yinghui-Liu/KDDC).
Open Datasets | Yes | We chose two nationwide travel behavior datasets, Foursquare (https://sites.google.com/site/yangdingqi/home/foursquaredataset) and Yelp (https://www.yelp.com.tw/dataset), to evaluate our framework.
Dataset Splits | Yes | The two datasets are randomly partitioned based on users into three sets for training, validation, and testing following the proportions 80%, 10%, and 10%.
Hardware Specification | No | The paper mentions "We implemented our KDDC and experimented with Pytorch." but does not specify any hardware details such as GPU/CPU models or memory.
Software Dependencies | No | The paper mentions "We implemented our KDDC and experimented with Pytorch." However, it does not provide specific version numbers for PyTorch or any other software, which is required for reproducibility.
Experiment Setup | Yes | The number of dimensions of all latent representations was set to 128. In the Knowledge Graph Segmented Pre-Training, n was set to 8 for Foursquare and 4 for Yelp. In the optimization stage, λ1 and λ2 were set to 1, the optimizer was Adam with an initial learning rate of 0.001, and an L2 regularization weight of 10^-5 was applied.
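The user-based 80/10/10 split reported above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and the fixed seed are our own assumptions (the paper does not state a seed).

```python
import random

def split_users(user_ids, seed=42):
    """Randomly partition users into train/val/test sets at 80%/10%/10%.

    Illustrative sketch of the split described in the paper; the seed
    and helper name are assumptions, not taken from the authors' code.
    """
    ids = list(user_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

train, val, test = split_users(range(1000))
print(len(train), len(val), len(test))  # 800 100 100
```

Splitting by user (rather than by interaction) ensures that no test user's behavior leaks into training, which matters for an out-of-town recommendation task.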
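The hyperparameters listed in the Experiment Setup row can be collected into a single configuration. The dictionary keys below are our own naming, not the authors'; the values are the ones reported in the paper. In PyTorch, the optimizer settings would map onto `torch.optim.Adam(params, lr=1e-3, weight_decay=1e-5)`.

```python
# Hyperparameters reported in the paper's experiment setup.
# Key names are illustrative assumptions; values come from the paper.
KDDC_CONFIG = {
    "embed_dim": 128,                              # dimension of all latent representations
    "n_segments": {"Foursquare": 8, "Yelp": 4},    # KG segmented pre-training parameter n
    "lambda1": 1.0,                                # loss weight λ1
    "lambda2": 1.0,                                # loss weight λ2
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "weight_decay": 1e-5,                          # L2 regularization weight
}
```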