Towards Robust Trajectory Representations: Isolating Environmental Confounders with Causal Learning
Authors: Kang Luo, Yuanshao Zhu, Wei Chen, Kun Wang, Zhengyang Zhou, Sijie Ruan, Yuxuan Liang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two real-world datasets verify that TrajCL markedly enhances performance in trajectory classification tasks while showcasing superior generalization and interpretability. |
| Researcher Affiliation | Academia | ¹Hong Kong University of Science and Technology (Guangzhou); ²University of Science and Technology of China; ³Beijing Institute of Technology |
| Pseudocode | No | The paper describes modules and components but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete statement or link regarding the availability of its source code. |
| Open Datasets | Yes | We conduct extensive experiments on two real-world datasets, GeoLife [Zheng et al., 2009] and Grab-Posisi [Huang et al., 2019]. |
| Dataset Splits | Yes | These trajectories are subsequently partitioned in an 8:1:1 ratio for training, validation, and test data. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | Our model uses the Adam optimizer with the initial learning rate set to 0.001, reduced by 0.1 every 30 epochs. To avoid overfitting, we employ early stopping with a patience of 20 epochs. The batch sizes for the GeoLife and Grab-Posisi datasets are 256 and 512, respectively. The default embedding dimension is set to 64. For the predictor, we apply a 2-layer MLP uniformly. To initially merge local features from inputs, we employ two 3 × 1 convolutional layers. The weight parameters λ, φ, and η of the loss are 1, 0.5, and 0.5, respectively. (A hedged training-setup sketch follows the table.) |
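
The experiment-setup and dataset-split rows together report the optimizer, learning-rate schedule, early stopping, batch sizes, embedding dimension, predictor depth, convolutional feature merging, loss weights, and the 8:1:1 split. Since the paper releases no code, the PyTorch sketch below only illustrates how those reported hyperparameters could be wired together; the `TrajClassifier` module, the dummy data, the feature and class counts, and the reduction of the full objective to its classification term are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, random_split, DataLoader

EMBED_DIM = 64                     # default embedding dimension (reported)
BATCH_SIZE = 256                   # 256 for GeoLife, 512 for Grab-Posisi (reported)
LAMBDA, PHI, ETA = 1.0, 0.5, 0.5   # reported loss weights; only LAMBDA is used in this sketch

# Dummy stand-in for a trajectory dataset: 1,000 trajectories, 4 point-wise
# features, fixed length 128, 5 classes (all of these counts are assumptions).
data = TensorDataset(torch.randn(1000, 4, 128), torch.randint(0, 5, (1000,)))
n_train, n_val = int(0.8 * len(data)), int(0.1 * len(data))
train_set, val_set, test_set = random_split(
    data, [n_train, n_val, len(data) - n_train - n_val])   # 8:1:1 split
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)
val_loader = DataLoader(val_set, batch_size=BATCH_SIZE)

class TrajClassifier(nn.Module):
    """Placeholder backbone: two 3x1 convolutions to merge local features,
    followed by a 2-layer MLP predictor, mirroring the reported setup."""
    def __init__(self, in_channels: int = 4, num_classes: int = 5):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv1d(in_channels, EMBED_DIM, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(EMBED_DIM, EMBED_DIM, kernel_size=3, padding=1), nn.ReLU())
        self.predictor = nn.Sequential(                      # 2-layer MLP predictor
            nn.Linear(EMBED_DIM, EMBED_DIM), nn.ReLU(),
            nn.Linear(EMBED_DIM, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.predictor(self.local(x).mean(dim=-1))    # pool over trajectory length

model = TrajClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)    # Adam, initial lr 0.001
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

best_val, patience, wait = float("inf"), 20, 0               # early stopping, patience 20
for epoch in range(100):
    model.train()
    for x, y in train_loader:
        # The full objective is lambda*L_cls + phi*L_1 + eta*L_2; the auxiliary
        # causal terms are model-specific, so only the classification term appears here.
        loss = LAMBDA * criterion(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                                         # lr x0.1 every 30 epochs

    model.eval()
    with torch.no_grad():
        val_loss = sum(criterion(model(x), y).item() for x, y in val_loader)
    if val_loss < best_val:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            break
```

The `StepLR(step_size=30, gamma=0.1)` scheduler is one straightforward reading of "reduced by 0.1 every 30 epochs"; the validation split driving early stopping corresponds to the 10% validation partition from the 8:1:1 split.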