A Set of Control Points Conditioned Pedestrian Trajectory Prediction
Authors: Inhwan Bae, Hae-Gon Jeon
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments, the proposed network achieves state-of-the-art performance on various real-world trajectory prediction benchmarks. |
| Researcher Affiliation | Academia | Gwangju Institute of Science and Technology (GIST) inhwanbae@gm.gist.ac.kr and haegonj@gist.ac.kr |
| Pseudocode | No | The paper describes the methodology using mathematical formulas and descriptions but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statement about releasing the source code or provide a link to a code repository for the methodology described. |
| Open Datasets | Yes | We evaluate our Graph-TERN using four real-world public datasets: ETH (Pellegrini et al. 2009), UCY (Lerner, Chrysanthou, and Lischinski 2007), Stanford Drone Dataset (SDD) (Robicquet et al. 2016), and Train Station dataset (Yi, Li, and Wang 2015). |
| Dataset Splits | Yes | For strictly fair comparison with state-of-the-art models and our ablation study, we follow a standard evaluation protocol in (Gupta et al. 2018). |
| Hardware Specification | Yes | We train our model using an SGD optimizer with a batch size of 128 and learning rate of 1e-4 for 512 epochs, which usually takes one day on a machine with an NVIDIA 2080Ti GPU. |
| Software Dependencies | No | The paper mentions using an SGD optimizer and PReLU activation but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | We train our model using an SGD optimizer with a batch size of 128 and learning rate of 1e-4 for 512 epochs, which usually takes one day on a machine with an NVIDIA 2080Ti GPU. DropEdge (Rong et al. 2020) with a rate of 0.8 is used for the GCN layer, and PReLU activation is used for all layers. Data augmentation schemes like random flip, rotation, and scaling are performed during the training phase. |
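
For context, the Experiment Setup row corresponds roughly to the training loop below. This is a minimal, self-contained sketch assuming a PyTorch implementation; the stand-in model, synthetic data, L2 loss, augmentation ranges, and the 8-in / 12-out frame horizon are assumptions for illustration rather than details from the paper, and the DropEdge regularization on the GCN layer is omitted.

```python
# Minimal sketch of the quoted training configuration: SGD, batch size 128,
# learning rate 1e-4, 512 epochs, PReLU activations, and flip/rotation/scaling
# augmentation. The model, synthetic data, and L2 loss are stand-ins, not the
# paper's Graph-TERN; DropEdge on the GCN layer is omitted here.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

obs_len, pred_len = 8, 12                              # assumed 8-in / 12-out horizon

model = nn.Sequential(                                 # stand-in for Graph-TERN
    nn.Linear(obs_len * 2, 64), nn.PReLU(),
    nn.Linear(64, pred_len * 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

# Synthetic (x, y) trajectories standing in for real benchmark samples.
dataset = TensorDataset(torch.randn(1024, obs_len, 2),
                        torch.randn(1024, pred_len, 2))
loader = DataLoader(dataset, batch_size=128, shuffle=True)

def augment(obs, fut):
    """Random flip, rotation, and scaling applied jointly to a batch."""
    if torch.rand(()) < 0.5:                           # random flip about the y-axis
        obs, fut = obs * torch.tensor([-1.0, 1.0]), fut * torch.tensor([-1.0, 1.0])
    theta = torch.rand(()) * 2 * torch.pi              # random rotation angle
    c, s = torch.cos(theta), torch.sin(theta)
    rot = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
    obs, fut = obs @ rot.T, fut @ rot.T
    scale = 0.8 + 0.4 * torch.rand(())                 # assumed scaling range [0.8, 1.2]
    return obs * scale, fut * scale

for epoch in range(512):                               # 512 epochs, as quoted
    for obs, fut in loader:
        obs, fut = augment(obs, fut)
        pred = model(obs.flatten(1)).view(-1, pred_len, 2)
        loss = (pred - fut).pow(2).mean()              # stand-in L2 loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```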