MS-TIP: Imputation Aware Pedestrian Trajectory Prediction

Authors: Pranav Singh Chib, Achintya Nath, Paritosh Kabra, Ishu Gupta, Pravendra Singh

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we present the quantitative and qualitative results of our approach. Details regarding the implementation are provided in the Appendix. We evaluated our method on the widely used publicly available human trajectory prediction benchmarks ETH, HOTEL, UNIV, ZARA1, and ZARA2. More details are provided in Table 1. We use popular assessment measures for trajectory prediction, such as Average Displacement Error (ADE) and Final Displacement Error (FDE). (See the ADE/FDE sketch below the table.)
Researcher Affiliation | Academia | Department of Computer Science and Engineering, Indian Institute of Technology, Roorkee, India. Correspondence to: Pranav Singh Chib <pranavs_chib@cs.iitr.ac.in>, Pravendra Singh <pravendra.singh@cs.iitr.ac.in>.
Pseudocode | No | The paper describes its methods in prose and with diagrams (Figure 1, Figure 2), but it does not include any formal pseudocode or algorithm blocks.
Open Source Code | Yes | Code is publicly available at https://github.com/Pranav-chib/MS-TIP.
Open Datasets | Yes | We evaluated our method on the widely used publicly available human trajectory prediction benchmarks ETH, HOTEL, UNIV, ZARA1, and ZARA2. More details are provided in Table 1.
Dataset Splits | No | The paper mentions evaluating on public benchmarks (ETH, HOTEL, UNIV, ZARA1, and ZARA2) and describes observed/predicted frames, but it does not explicitly state the training/validation/test split details (e.g., percentages or sample counts) needed to reproduce the data partitioning for these benchmarks. (The conventional leave-one-scene-out protocol is sketched below the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or cloud computing resources used for running its experiments.
Software Dependencies | Yes | We use Python 3.8.13 and PyTorch version 1.13.1+cu117.
Experiment Setup | Yes | The number of control points K (see Section 3.6) is set to 3. The number of endpoints sampled from the GMM, Ω, is set to 20. We employ a batch size of 128 during the training process, with the number of training epochs set to 512. The learning rate for the entire model is specified as 1 × 10^-4. The optimizer used is SGD. Initially, we pretrain the imputation model over the entire dataset to make a reasonable initial imputation. We use the Adam optimizer with a learning rate of 1 × 10^-4 and weight decay of 1 × 10^-5 / 2 for this pretraining. Subsequently, we fine-tune the imputation model using a learning rate of 1 × 10^-5. (These hyperparameters are wired into a PyTorch sketch below the table.)
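
The ADE and FDE metrics quoted in the Research Type row are standard displacement errors. Below is a minimal NumPy sketch, assuming predictions and ground truth arrive as (pedestrians, timesteps, 2) coordinate arrays; a best-of-N variant is included because the paper samples Ω = 20 endpoints per pedestrian. The function and array names are illustrative, not taken from the MS-TIP codebase.

import numpy as np

def ade(pred, gt):
    # Average Displacement Error: mean L2 distance over all predicted timesteps.
    # pred, gt: arrays of shape (num_pedestrians, num_timesteps, 2).
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def fde(pred, gt):
    # Final Displacement Error: mean L2 distance at the last timestep only.
    return float(np.linalg.norm(pred[:, -1] - gt[:, -1], axis=-1).mean())

def min_ade_fde(preds, gt):
    # Best-of-N evaluation, commonly used when N trajectories are sampled per
    # pedestrian (here the paper samples Omega = 20 endpoints from the GMM).
    # preds: (N, num_pedestrians, num_timesteps, 2); gt: (num_pedestrians, num_timesteps, 2).
    dists = np.linalg.norm(preds - gt[None], axis=-1)   # (N, P, T)
    best_ade = dists.mean(axis=-1).min(axis=0).mean()   # best sample per pedestrian
    best_fde = dists[..., -1].min(axis=0).mean()
    return float(best_ade), float(best_fde)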
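
On the Dataset Splits row: the paper does not spell out its partitioning, but the conventional protocol for these five benchmarks is leave-one-scene-out, training on four scenes and evaluating on the held-out fifth, with 8 observed frames (3.2 s) and 12 predicted frames (4.8 s) at 0.4 s per frame. The sketch below encodes that field convention; it is an assumption, not a split confirmed by the paper.

SCENES = ["eth", "hotel", "univ", "zara1", "zara2"]
OBS_FRAMES, PRED_FRAMES = 8, 12  # 3.2 s observed, 4.8 s predicted at 0.4 s/frame

def leave_one_out(test_scene):
    # Conventional ETH/UCY evaluation: hold out one scene, train on the rest.
    train_scenes = [s for s in SCENES if s != test_scene]
    return train_scenes, [test_scene]

for scene in SCENES:
    train, test = leave_one_out(scene)
    print(f"test on {test}, train on {train}")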
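
For concreteness, the hyperparameters quoted in the Experiment Setup row can be wired into PyTorch roughly as below. The two nn.Linear placeholders stand in for the paper's actual imputation and prediction modules, whose real definitions live in the linked repository; everything else follows the quoted values.

import torch

# Hyperparameters quoted in the Experiment Setup row.
K_CONTROL_POINTS = 3   # spline control points K (Section 3.6)
NUM_ENDPOINTS = 20     # endpoints Omega sampled from the GMM
BATCH_SIZE = 128
EPOCHS = 512

# Placeholders for the paper's modules (hypothetical, for illustration only).
imputation_model = torch.nn.Linear(2, 2)
prediction_model = torch.nn.Linear(2, 2)

# Stage 1: pretrain the imputation model on the full dataset
# (Adam, lr = 1e-4, weight decay = 1e-5 / 2).
pretrain_opt = torch.optim.Adam(imputation_model.parameters(),
                                lr=1e-4, weight_decay=1e-5 / 2)

# Stage 2: train the full model with SGD at lr = 1e-4 while fine-tuning the
# imputation model at the lower lr = 1e-5, via per-parameter-group rates.
main_opt = torch.optim.SGD([
    {"params": prediction_model.parameters(), "lr": 1e-4},
    {"params": imputation_model.parameters(), "lr": 1e-5},
])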