DenseKoopman: A Plug-and-Play Framework for Dense Pedestrian Trajectory Prediction
Authors: Xianbang Li, Yilong Ren, Han Jiang, Haiyang Yu, Yanlei Cui, Liang Xu
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on pedestrian trajectory prediction benchmarks demonstrate the superiority of the proposed framework. We also conducted an analysis of the data transformation to explore how our DenseKoopman framework works with each validation method and uncovers motion patterns that may be hidden within the trajectory data. Code is available at https://github.com/lixianbang/DenseKoopman. |
| Researcher Affiliation | Academia | 1School of Transportation Science and Engineering, Beihang University 2State Key Lab of Intelligent Transportation System, Beihang University 3Hangzhou Innovation Institute, Beihang University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/lixianbang/DenseKoopman. |
| Open Datasets | Yes | We evaluated our method on three public datasets containing dense pedestrian scenarios, namely Multiple Object Tracking 20 [Dendorfer et al., 2020], Head Tracking 21 [Sundararaman et al., 2021], and VSCrowd [Li et al., 2022]. |
| Dataset Splits | Yes | For each dataset, we implemented the leave-one-out cross-validation approach, where the model was trained on four scenes and then tested on the remaining one [Salzmann et al., 2020; Kosaraju et al., 2019] (see the split sketch after the table). |
| Hardware Specification | No | The paper does not specify the exact hardware used for the experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions the use of CNN, LSTM, and Stable Diffusion frameworks but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | The paths are sampled at intervals of 0.4 seconds (2.5 frames per second), with the initial 3.2 seconds of a path serving as observed data for forecasting the subsequent 4.8 seconds of future trajectory; that is, we predict the next 12 frames of a pedestrian's trajectory from 8 observed frames. To verify the superiority of the proposed framework in crowded scenarios containing at least 20 people, we process data with [20 pedestrians, 8 seconds] as the spatiotemporal window (see the windowing sketch after the table). |
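
The leave-one-out protocol quoted in the Dataset Splits row (train on four scenes, test on the held-out fifth) can be expressed in a few lines. The following is a minimal sketch, not the authors' code: the scene names and the fold loop body are hypothetical placeholders.

```python
# Hypothetical scene names; the paper evaluates on MOT20, HT21, and VSCrowd scenes.
SCENES = ["scene_1", "scene_2", "scene_3", "scene_4", "scene_5"]

def leave_one_out(scenes):
    """Yield (train_scenes, test_scene) pairs, holding out one scene per fold."""
    for i, held_out in enumerate(scenes):
        yield scenes[:i] + scenes[i + 1:], held_out

for train_scenes, test_scene in leave_one_out(SCENES):
    # A real run would train on `train_scenes` and report metrics on `test_scene`.
    print(f"train: {train_scenes} -> test: {test_scene}")
```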
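
The Experiment Setup row implies a fixed spatiotemporal window: 8 observed frames plus 12 predicted frames equals 20 frames, or 8 seconds at 2.5 fps. The sketch below shows one plausible way to slice per-scene tracks into such windows while enforcing the 20-pedestrian density threshold; the array layout, the NaN-based validity test, and the function name `make_windows` are assumptions, not taken from the paper.

```python
import numpy as np

FPS = 2.5                    # one sample every 0.4 s
OBS_LEN = 8                  # 3.2 s of observed motion
PRED_LEN = 12                # 4.8 s of future trajectory
MIN_PEDS = 20                # dense-scene threshold
WINDOW = OBS_LEN + PRED_LEN  # 20 frames = 8 s spatiotemporal window

def make_windows(tracks):
    """Slice tracks of shape (num_peds, num_frames, 2) into (observed, future)
    pairs, keeping only windows in which at least MIN_PEDS pedestrians are
    fully tracked (no NaN coordinates)."""
    _, num_frames, _ = tracks.shape
    samples = []
    for start in range(num_frames - WINDOW + 1):
        win = tracks[:, start:start + WINDOW, :]
        valid = ~np.isnan(win).any(axis=(1, 2))  # pedestrians visible throughout
        if valid.sum() >= MIN_PEDS:
            samples.append((win[valid, :OBS_LEN], win[valid, OBS_LEN:]))
    return samples

# Toy usage: 25 pedestrians tracked for 40 frames (16 s at 2.5 fps).
pairs = make_windows(np.random.rand(25, 40, 2))
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)  # 21 (25, 8, 2) (25, 12, 2)
```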