Generative Causal Representation Learning for Out-of-Distribution Motion Forecasting

Authors: Shayan Shirahmad Gale Bagi, Zahra Gharaee, Oliver Schulte, Mark Crowley

ICML 2023

Reproducibility Variable: Result (supporting LLM response indented below)

Research Type: Experimental
  "Experimental results on synthetic and real-world motion forecasting datasets show the robustness and effectiveness of our proposed method for knowledge transfer under zero-shot and low-shot settings by substantially outperforming the prior motion forecasting models on out-of-distribution prediction."

Researcher Affiliation: Academia
  "1 Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Canada; 2 Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada; 3 School of Computing Science, Simon Fraser University, Burnaby, Canada."

Pseudocode: No
  The paper does not contain structured pseudocode or clearly labeled algorithm blocks.

Open Source Code: Yes
  "Our code is available at https://github.com/sshirahmad/GCRL."

Open Datasets: Yes
  "ETH-UCY dataset: This dataset contains the trajectories of 1,536 detected pedestrians captured in five different environments {hotel, eth, univ, zara1, zara2}. All trajectories in the dataset are sampled every 0.4 seconds. Following the experimental settings of (Liu et al., 2022; Chen et al., 2021; Huang et al., 2019), we also use a leave-one-out approach for training and evaluating our model: to predict the future 4.8 seconds (12 frames), we utilize the previously observed 3.2 seconds (8 frames)."

Dataset Splits: Yes
  "Each domain contains 10,000 trajectories for training, 3,000 trajectories for validation, and 5,000 trajectories for testing."

Hardware Specification: No
  The paper does not specify the hardware used for its experiments, such as exact GPU/CPU models, processor types, or memory amounts.

Software Dependencies: No
  The paper does not list software dependencies with version numbers (e.g., Python 3.x, PyTorch x.x, CUDA x.x) needed to replicate the experiment environment.

Experiment Setup: Yes
  "The list of all hyperparameters used by our model and their corresponding settings applied when conducting our experiments are presented in Tables 5 and 6." The model is trained for 300 epochs in the ETH-UCY experiments and for 250 epochs in the synthetic-dataset experiments; trained models are fine-tuned for 100 epochs for the domain adaptation task. The batch size is 64.
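The setup reported above can be collected into a small configuration sketch. This is a minimal illustration in Python, not the authors' code: the class and field names are hypothetical (the paper does not name them), and the values are the ones quoted in the table, including the leave-one-out protocol over the five ETH-UCY scenes.

```python
from dataclasses import dataclass

# Hypothetical configuration container; field names are illustrative,
# values are the ones quoted in the reproducibility table above.
@dataclass
class GCRLExperimentConfig:
    frame_interval_s: float = 0.4      # ETH-UCY sampling period
    obs_frames: int = 8                # 3.2 s of observed history
    pred_frames: int = 12              # 4.8 s of predicted future
    epochs_eth_ucy: int = 300          # training epochs on ETH-UCY
    epochs_synthetic: int = 250        # training epochs on the synthetic dataset
    epochs_finetune: int = 100         # domain-adaptation fine-tuning epochs
    batch_size: int = 64
    train_trajectories: int = 10_000   # per domain
    val_trajectories: int = 3_000      # per domain
    test_trajectories: int = 5_000     # per domain

# Leave-one-out evaluation over the five ETH-UCY scenes: each scene is
# held out for testing once while the remaining four are used for training.
SCENES = ["hotel", "eth", "univ", "zara1", "zara2"]

def leave_one_out_splits(scenes):
    return [(held_out, [s for s in scenes if s != held_out])
            for held_out in scenes]

cfg = GCRLExperimentConfig()
for held_out, train_scenes in leave_one_out_splits(SCENES):
    assert len(train_scenes) == 4 and held_out not in train_scenes
# Sanity-check the frame/time arithmetic quoted in the table.
assert abs(cfg.obs_frames * cfg.frame_interval_s - 3.2) < 1e-9
assert abs(cfg.pred_frames * cfg.frame_interval_s - 4.8) < 1e-9
```

Having the reported numbers in one place like this makes it easy to see which replication details are pinned down by the paper (splits, epochs, batch size, frame windows) and which are missing (hardware, software versions).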