Complementary Attention Gated Network for Pedestrian Trajectory Prediction

Authors: Jinghai Duan, Le Wang, Chengjiang Long, Sanping Zhou, Fang Zheng, Liushuai Shi, Gang Hua

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on benchmark datasets, i.e., ETH and UCY, demonstrate that our method outperforms state-of-the-art methods by 13.8% in Average Displacement Error (ADE) and 10.4% in Final Displacement Error (FDE).
Researcher Affiliation | Collaboration | Jinghai Duan (1), Le Wang (2)*, Chengjiang Long (3), Sanping Zhou (2), Fang Zheng (1), Liushuai Shi (1), Gang Hua (4); (1) School of Software Engineering, Xi'an Jiaotong University; (2) Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University; (3) JD Finance America Corporation; (4) Wormpex AI Research
Pseudocode | No | The paper does not contain a pseudocode block or a clearly labeled algorithm.
Open Source Code | Yes | Code will be available at https://github.com/jinghaiD/CAGN
Open Datasets | Yes | To evaluate our method, we conduct extensive experiments on the ETH (Pellegrini et al. 2009) and UCY (Lerner, Chrysanthou, and Lischinski 2007) datasets.
Dataset Splits | No | The paper states, "Following the recent method (Sun, Jiang, and Lu 2020), we use the leave-one-out strategy for training on four scenarios and testing on the rest ones." This describes the train/test split strategy (see the leave-one-out sketch after the table) but does not explicitly mention a separate validation split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions "The Adam optimizer is used to train our model" but does not provide specific software dependencies with version numbers for frameworks, libraries, or other tools (e.g., PyTorch, TensorFlow, Python version).
Experiment Setup | Yes | In our experiments, the embedding dimensions D_e, D_f, and D_final are set to 8, the number of heads H of the complementary block is set to 4, and the number of heads of the dual-path attention is set to 1. The dimension of the MLP in endpoint prediction is set to 64-128-256-128-64. The threshold ξ is empirically set to 0.5, and the nonlinear activation function of the MLP is ReLU. The Adam optimizer is used to train our model for 650 epochs with a learning rate of 0.0003, decaying by 0.1 with an interval of 50 epochs. During testing, 20 trajectories are sampled from the learned mixed Gaussian distribution according to the weights of the multiple Gaussian distributions. The trajectory closest to the ground truth is used to calculate ADE and FDE. (See the training and evaluation sketch below.)
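
For concreteness, here is a minimal sketch of the leave-one-out protocol quoted in the Dataset Splits row, assuming the five standard ETH/UCY scenes (eth, hotel, univ, zara1, zara2). The scene names and loop structure are illustrative; they are not taken from the paper's code.

```python
# Leave-one-out over the five ETH/UCY scenes: train on four, test on the
# held-out one. Scene identifiers are the conventional names, assumed here.
SCENES = ["eth", "hotel", "univ", "zara1", "zara2"]

def leave_one_out_splits(scenes):
    """Yield (train_scenes, test_scene) pairs, one per held-out scene."""
    for held_out in scenes:
        train = [s for s in scenes if s != held_out]
        yield train, held_out

for train_scenes, test_scene in leave_one_out_splits(SCENES):
    print(f"train on {train_scenes}, test on {test_scene}")
```

Note that this protocol yields five separate models, one per held-out scene; the per-scene ADE/FDE numbers reported in such papers come from these five runs.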
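And here is a hedged PyTorch sketch of the training schedule and best-of-20 evaluation described in the Experiment Setup row: Adam with learning rate 3e-4 decayed by 0.1 every 50 epochs over 650 epochs, and ADE/FDE computed from the sampled trajectory closest to the ground truth. The model stand-in, the omitted data pipeline, and the `best_of_k_ade_fde` helper are assumptions for illustration, not the paper's implementation.

```python
import torch

# Stand-in for the CAGN endpoint-prediction MLP; layer widths follow the
# reported 64-128-256-128-64 dimensions, with ReLU activations.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 64),
)

# Reported optimizer settings: Adam, lr 0.0003, decay by 0.1 every 50 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

for epoch in range(650):
    # ... one pass over the training data: forward pass, loss.backward(),
    # optimizer.step() ... (data pipeline and loss are not specified here)
    scheduler.step()

def best_of_k_ade_fde(samples, gt):
    """samples: (K, T, 2) predicted trajectories; gt: (T, 2) ground truth.
    Returns ADE/FDE of the sample closest to the ground truth (min over K)."""
    dists = torch.linalg.norm(samples - gt.unsqueeze(0), dim=-1)  # (K, T)
    ade = dists.mean(dim=1)  # average displacement over all timesteps
    fde = dists[:, -1]       # displacement at the final timestep
    best = torch.argmin(ade)
    return ade[best].item(), fde[best].item()
```

With K = 20 samples drawn from the learned mixture of Gaussians, `best_of_k_ade_fde` matches the best-of-20 evaluation the paper describes: ADE averages the displacement over all predicted timesteps, while FDE takes only the final-timestep displacement.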