Out-of-Town Recommendation with Travel Intention Modeling

Authors: Haoran Xin, Xinjiang Lu, Tong Xu, Hao Liu, Jingjing Gu, Dejing Dou, Hui Xiong (pp. 4529-4536)

AAAI 2021

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on real-world data sets validate the effectiveness of the TRAINOR framework.
Researcher Affiliation Collaboration Haoran Xin (1,2), Xinjiang Lu (2,3)*, Tong Xu (1), Hao Liu (2,3), Jingjing Gu (4), Dejing Dou (2,3), Hui Xiong (5). Affiliations: 1 University of Science and Technology of China; 2 Business Intelligence Lab, Baidu Research; 3 National Engineering Laboratory of Deep Learning Technology and Application, China; 4 Nanjing University of Aeronautics and Astronautics; 5 Rutgers University
Pseudocode No The paper describes the model and processes using text and mathematical equations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code No The paper does not provide any statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets No The paper uses three real-world travel behavior datasets (BJ→SH, SH→HZ, and GZ→FS) but does not provide any concrete access information (link, DOI, specific repository, or formal citation with authors/year) for them to be considered publicly available.
Dataset Splits Yes Then, we randomly split users following the proportions: 80%, 10%, and 10% to form a training set, a test set, and a validation set.
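The user-level 80/10/10 split quoted above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, the fixed seed, and the train/validation/test ordering of the slices are assumptions.

```python
import random

def split_users(user_ids, seed=42):
    """Randomly partition users 80%/10%/10% into train, validation,
    and test sets (hypothetical helper mirroring the quoted protocol)."""
    ids = list(user_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n = len(ids)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test
```

Splitting by user (rather than by individual check-in) ensures that no user's behavior leaks from the training set into evaluation.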
Hardware Specification No The paper does not specify any details regarding the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies No The paper mentions 'Adam optimizer' and 'G-GNN model' but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup Yes The number d (i.e. the hidden size) was fixed to 128 for all latent representations. And, the number of layers in G-GNN was set to 1. In the travel intention discovery module, we set the topic number K as 15 for better explanation. In the joint training stage, we set λ1 = λ2 = λ3 = 1 in Eq. (18). We used Adam optimizer to train our approach with an initial learning rate as 0.001 and an L2 regularization with weight 10^-5.
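The hyperparameters quoted above can be gathered into a single configuration for reference. The values come from the quoted setup; the dictionary key names are assumptions introduced here for illustration.

```python
# Hypothetical configuration mirroring the paper's reported setup.
TRAINOR_HPARAMS = {
    "hidden_size": 128,          # d, shared by all latent representations
    "ggnn_layers": 1,            # number of layers in the G-GNN
    "num_topics": 15,            # K, travel intention topics
    "loss_weights": {             # lambda_1 = lambda_2 = lambda_3 = 1 in Eq. (18)
        "lambda1": 1.0,
        "lambda2": 1.0,
        "lambda3": 1.0,
    },
    "optimizer": "Adam",
    "learning_rate": 1e-3,       # initial learning rate
    "l2_weight": 1e-5,           # L2 regularization weight
}
```

In common frameworks the last three entries map directly onto the optimizer constructor (e.g. Adam with `lr=1e-3` and a weight-decay-style L2 penalty of `1e-5`).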