Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
RedMotion: Motion Prediction via Redundancy Reduction
Authors: Royden Wagner, Omer Sahin Tas, Marvin Klemp, Carlos Fernandez, Christoph Stiller
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments reveal that our representation learning approach outperforms PreTraM, Traj-MAE, and GraphDINO in a semi-supervised setting. Moreover, RedMotion achieves competitive results compared to HPTR or MTR++ in the Waymo Motion Prediction Challenge. Our open-source implementation is available at: https://github.com/kit-mrt/future-motion |
| Researcher Affiliation | Academia | ¹Karlsruhe Institute of Technology, ²FZI Research Center for Information Technology |
| Pseudocode | No | The paper describes the method in detail with figures and text, but it does not include any specific pseudocode blocks or algorithms. |
| Open Source Code | Yes | Our open-source implementation is available at: https://github.com/kit-mrt/future-motion |
| Open Datasets | Yes | We use the official training and validation splits of the Waymo Open Motion dataset (Ettinger et al., 2021) version 1.0 and the Argoverse 2 Forecasting dataset (Wilson et al., 2021) as training and validation data. |
| Dataset Splits | Yes | We use the official training and validation splits of the Waymo Open Motion dataset (Ettinger et al., 2021) version 1.0 and the Argoverse 2 Forecasting dataset (Wilson et al., 2021) as training and validation data. Since pre-training is particularly useful when little annotated data is available, we use 100% of the training data for pre-training and fine-tune on only 12.5%, following common practice in self-supervised learning (Balestriero et al., 2023). |
| Hardware Specification | Yes | We pre-train and fine-tune all configurations for 4 hours and 8 hours using data-parallel training on 4 A100 GPUs. |
| Software Dependencies | No | The paper mentions using AdamW as an optimizer and PyTorch for implementation, but it does not specify version numbers for these software components or any other libraries. |
| Experiment Setup | Yes | For pre-training and fine-tuning, we use AdamW (Loshchilov & Hutter, 2019) as the optimizer. The initial learning rate is set to 10^-4 and reduced to 10^-6 using a cosine annealing learning rate scheduler (Loshchilov & Hutter, 2016). We pre-train and fine-tune all configurations for 4 and 8 hours, respectively, using data-parallel training on 4 A100 GPUs. Following Konev et al. (2022), we minimize the negative multivariate log-likelihood loss for fine-tuning on motion prediction. For Traj-MAE pre-training, we mask 60% of the road environment tokens and train to reconstruct them. Our models generate 6 trajectory proposals per agent. We use an attention window size of 16 tokens. |
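The learning-rate schedule quoted above (10^-4 annealed to 10^-6 via cosine annealing) can be sketched in plain Python. This is an illustrative reconstruction of the standard cosine annealing formula from Loshchilov & Hutter (2016), not the authors' code; the function name and per-step granularity are assumptions.

```python
import math

def cosine_annealing_lr(step, total_steps, lr_max=1e-4, lr_min=1e-6):
    """Cosine annealing from lr_max down to lr_min over total_steps.

    Standard schedule (Loshchilov & Hutter, 2016); defaults match the
    10^-4 -> 10^-6 range reported in the paper's experiment setup.
    """
    progress = step / total_steps  # fraction of training completed, in [0, 1]
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))
```

In a PyTorch setup such as the one described, this corresponds to wrapping the AdamW optimizer in `torch.optim.lr_scheduler.CosineAnnealingLR` with `eta_min=1e-6`.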