Contrastive Pre-training with Adversarial Perturbations for Check-In Sequence Representation Learning
Authors: Letian Gong, Youfang Lin, Shengnan Guo, Yan Lin, Tianyi Wang, Erwen Zheng, Zeyu Zhou, Huaiyu Wan
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness and versatility of CACSR on two kinds of downstream tasks using three real-world datasets. The results show that our model outperforms both the state-of-the-art pre-training methods and the end-to-end models. |
| Researcher Affiliation | Academia | (1) School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China; (2) Beijing Key Laboratory of Traffic Data Analysis and Mining, Beijing, China |
| Pseudocode | No | The paper includes figures illustrating the model architecture and workflow, but it does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code has been released at: https://github.com/LetianGong/CACSR |
| Open Datasets | No | The paper mentions using "three real-world datasets derived from the raw Gowalla check-in data and the Foursquare check-in data of New York City (NYC) and Jakarta (JKT)." However, it does not provide specific links, DOIs, or citations with author/year information for directly accessing these derived datasets or the raw data from which they were obtained. |
| Dataset Splits | Yes | We split all datasets at a ratio of 6 : 2 : 2 into training, validation, and test sets by samples. |
| Hardware Specification | Yes | All trials have been conducted on Intel Xeon E5-2620 CPUs and NVIDIA RTX A5000 GPUs. |
| Software Dependencies | No | The paper mentions using "PyTorch" but does not specify a version number (e.g., PyTorch 1.9), nor does it list any other software components with their versions. |
| Experiment Setup | Yes | For the parameter settings, we set the embedding size of all models to 256. The number of Bi-LSTM layers in the CACSR model is set to 3, the hidden state size to 512, σ = 0.1, the scale factor η = 1, ϵ = 1, α = 0.8, β = 0.5, and τ = 4. CACSR is pre-trained for 100 epochs on the training sets with an early-stopping patience of 5 (see the configuration sketch after this table). |
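
To make the reported setup easier to reuse, here is a minimal PyTorch sketch of the stated hyperparameters, the Bi-LSTM encoder sizes, and the 6 : 2 : 2 split. It is not the authors' released implementation (see the GitHub link above); the names `CACSRConfig`, `CheckInEncoder`, and `split_dataset`, as well as the exact embedding/LSTM layout, are assumptions made for illustration.

```python
# Hedged sketch of the reported CACSR pre-training configuration.
# All names below are illustrative assumptions, not taken from the released code.
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class CACSRConfig:
    embedding_size: int = 256   # embedding size used for all models
    lstm_layers: int = 3        # number of Bi-LSTM layers
    hidden_size: int = 512      # Bi-LSTM hidden state size
    sigma: float = 0.1          # σ
    eta: float = 1.0            # scale factor η
    epsilon: float = 1.0        # ϵ
    alpha: float = 0.8          # α
    beta: float = 0.5           # β
    tau: float = 4.0            # τ
    epochs: int = 100           # pre-training epochs
    patience: int = 5           # early-stopping patience


class CheckInEncoder(nn.Module):
    """Bi-LSTM check-in sequence encoder with the reported sizes (layout assumed)."""

    def __init__(self, num_locations: int, cfg: CACSRConfig):
        super().__init__()
        self.embedding = nn.Embedding(num_locations, cfg.embedding_size)
        self.bilstm = nn.LSTM(
            input_size=cfg.embedding_size,
            hidden_size=cfg.hidden_size,
            num_layers=cfg.lstm_layers,
            bidirectional=True,
            batch_first=True,
        )

    def forward(self, location_ids: torch.Tensor) -> torch.Tensor:
        # location_ids: (batch, seq_len) integer indices of check-in locations
        emb = self.embedding(location_ids)
        output, _ = self.bilstm(emb)  # (batch, seq_len, 2 * hidden_size)
        return output


def split_dataset(samples, ratios=(0.6, 0.2, 0.2)):
    """Split samples 6 : 2 : 2 into train / validation / test sets, as stated in the paper."""
    n = len(samples)
    n_train, n_val = int(ratios[0] * n), int(ratios[1] * n)
    return samples[:n_train], samples[n_train:n_train + n_val], samples[n_train + n_val:]
```

Under these assumptions, a pre-training run would instantiate `CheckInEncoder(num_locations, CACSRConfig())` and train for 100 epochs, stopping early once the validation loss fails to improve for 5 consecutive epochs.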