Social Interpretable Tree for Pedestrian Trajectory Prediction

Authors: Liushuai Shi, Le Wang, Chengjiang Long, Sanping Zhou, Fang Zheng, Nanning Zheng, Gang Hua. Pages 2235–2243.

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Despite the hand-crafted tree, the experimental results on ETH-UCY and Stanford Drone datasets demonstrate that our method is capable of matching or exceeding the performance of state-of-the-art methods. Interestingly, the experiments show that the raw built tree without training outperforms many prior deep neural network based approaches.
Researcher Affiliation | Collaboration | 1School of Software Engineering, Xi'an Jiaotong University; 2Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University; 3JD Finance America Corporation; 4Wormpex AI Research
Pseudocode | No | The paper describes its methodology in text and provides an overall framework diagram (Figure 3), but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code or providing a link to a code repository.
Open Datasets | Yes | To evaluate the effectiveness of our method, we conduct extensive experiments on two widely used datasets, i.e., ETH-UCY (Pellegrini et al. 2009; Lerner, Chrysanthou, and Lischinski 2007) and Stanford Drone Dataset (SDD) (Robicquet et al. 2016), in pedestrian trajectory prediction.
Dataset Splits | Yes | For ETH-UCY, we follow the leave-one-out strategy (Shi et al. 2021) for training and evaluation, in which the model is trained on four scenes and evaluated on the remaining one. For SDD, we use the prior train-test split (Mangalam et al. 2020) for evaluation.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or cloud computing instances used for running the experiments.
Software Dependencies | No | The paper mentions implementing encoding modules with MLP and using GCN and Huber loss, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | To implement the proposed method, all encoding modules are implemented by a 3-layer MLP with the PReLU non-linearity. We split the coarse trajectory tree three times, generating 27 paths. Other hyperparameters of this tree are recorded in the Appendix due to space limitations. All the coefficients λ1, λ2, and λ3 of the total loss are set to 1.
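The leave-one-out protocol noted in the Dataset Splits row can be sketched as follows. This is a minimal illustration, assuming the five standard ETH-UCY subset names; the function name is hypothetical, not from the paper.

```python
# The five standard ETH-UCY subsets (eth, hotel, univ, zara1, zara2).
ETH_UCY_SCENES = ["eth", "hotel", "univ", "zara1", "zara2"]

def leave_one_out_splits(scenes):
    """Yield (train_scenes, test_scene) pairs: train on four scenes,
    evaluate on the held-out fifth, rotating through all scenes."""
    for held_out in scenes:
        train = [s for s in scenes if s != held_out]
        yield train, held_out

for train_scenes, test_scene in leave_one_out_splits(ETH_UCY_SCENES):
    print(f"train on {train_scenes}, evaluate on {test_scene}")
```

Each scene serves as the test set exactly once, so results are reported per held-out scene and averaged.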
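The Experiment Setup row reports a 3-layer MLP encoder with PReLU and loss coefficients λ1 = λ2 = λ3 = 1; note also that splitting the tree three times with a branching factor of 3 yields 3³ = 27 paths. A minimal PyTorch sketch of that setup follows; the hidden width (64) and the placeholder loss terms are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

def make_encoder(in_dim, out_dim, hidden=64):
    """3-layer MLP with PReLU non-linearities, as reported in the setup.
    The hidden width is an assumed value for illustration."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.PReLU(),
        nn.Linear(hidden, hidden), nn.PReLU(),
        nn.Linear(hidden, out_dim),
    )

def total_loss(l1, l2, l3, lam1=1.0, lam2=1.0, lam3=1.0):
    """Weighted sum of three component losses; all coefficients are set
    to 1 per the paper. The components themselves are placeholders."""
    return lam1 * l1 + lam2 * l2 + lam3 * l3

enc = make_encoder(in_dim=2, out_dim=32)
out = enc(torch.randn(8, 2))  # batch of 8 two-dimensional coordinates
```

With unit coefficients the total loss is simply the sum of the three terms, so no per-term tuning is implied by the reported setup.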