Why We Go Where We Go: Profiling User Decisions on Choosing POIs
Authors: Renjun Hu, Xinjiang Lu, Chuanren Liu, Yanyan Li, Hao Liu, Jingjing Gu, Shuai Ma, Hui Xiong
IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, using two real-world data sets, we conduct extensive experiments. We find that PROUD outperforms baselines by at least 30% for preserving decision structures. Also, our case study demonstrates that the identified key factors are reasonable and insightful. In this section we conduct extensive experiments to evaluate our PROUD. |
| Researcher Affiliation | Collaboration | Renjun Hu¹, Xinjiang Lu², Chuanren Liu³, Yanyan Li², Hao Liu², Jingjing Gu⁴, Shuai Ma¹ and Hui Xiong⁵ — ¹SKLSDE Lab, Beihang University & Beijing Adv. Inno. Center for Big Data and Brain Computing; ²Business Intelligence Lab, Baidu Research; ³Business Analytics and Statistics, University of Tennessee, Knoxville; ⁴Nanjing University of Aeronautics and Astronautics; ⁵Rutgers University |
| Pseudocode | No | The paper describes the proposed model and its components in text and with a framework overview diagram (Fig. 3), but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about releasing source code for the described methodology or provide a link to a code repository. |
| Open Datasets | Yes | NYC was produced based on a public Foursquare checkin data set [Yang et al., 2015]. |
| Dataset Splits | Yes | For each data set, we randomly split the data into 70% for training, 10% for validation, and 20% for testing. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions techniques like 'Adam optimizer', 'Dropout', and 'sparsemax' but does not specify any software names with version numbers for libraries or frameworks used in the implementation (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | Yes | We used the Adam optimizer and a batch size of 512 to train PROUD. The learning rate γ was set to 0.01 at first and decayed to 0.7γ after each epoch. We employed (i) an L2 regularization with weight 10⁻⁵, (ii) a dropout with P_drop = 0.2 in Eq. (3), and (iii) an early stopping if the F1 on validation set did not increase in 5 epochs. The number d of dimensions was fixed to 64. |
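The training setup reported in the last row can be summarized as a plain-Python sketch. The paper does not release code or name a framework, so the config dictionary and helper functions below are illustrative only (all names are ours; the values are those reported in the paper):

```python
# Hyperparameters reported for training PROUD (values from the paper; names are ours).
CONFIG = {
    "optimizer": "Adam",
    "batch_size": 512,
    "initial_lr": 0.01,        # gamma, the initial learning rate
    "lr_decay": 0.7,           # lr decays to 0.7 * gamma after each epoch
    "l2_weight": 1e-5,         # L2 regularization weight
    "dropout": 0.2,            # P_drop in Eq. (3)
    "early_stop_patience": 5,  # stop if validation F1 does not improve in 5 epochs
    "embedding_dim": 64,       # d, the number of dimensions
}

def learning_rate(epoch: int) -> float:
    """Learning rate at a given epoch under the multiplicative decay schedule."""
    return CONFIG["initial_lr"] * CONFIG["lr_decay"] ** epoch

def should_stop(val_f1_history: list) -> bool:
    """Early stopping: True once the best validation F1 was first reached
    more than `early_stop_patience` epochs ago (no increase since)."""
    patience = CONFIG["early_stop_patience"]
    if len(val_f1_history) <= patience:
        return False
    best_epoch = val_f1_history.index(max(val_f1_history))
    return (len(val_f1_history) - 1) - best_epoch >= patience
```

This mirrors the reported schedule (e.g., the learning rate after two epochs would be 0.01 × 0.7² = 0.0049) but is not the authors' implementation.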