Disentangled Multi-Relational Graph Convolutional Network for Pedestrian Trajectory Prediction

Authors: Inhwan Bae, Hae-Gon Jeon (pp. 911-919)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through the effective incorporation of the three parts within an end-to-end framework, DMRGCN achieves state-of-the-art performances on a variety of challenging trajectory prediction benchmarks. ... 4 Experiments: In this section, we present our experimental results for two public datasets: ETH (Pellegrini et al. 2009) and UCY (Lerner, Chrysanthou, and Lischinski 2007), which contain pedestrian trajectories and various real-world human interactions. ... Table 1: Comparison of our DMRGCN with other state-of-the-art methods (ADE/FDE). ... 4.2 Ablation Studies: An extensive ablation study was conducted to examine the effects of different components on DMRGCN performance.
Researcher Affiliation | Academia | Inhwan Bae and Hae-Gon Jeon, Gwangju Institute of Science and Technology (GIST), inhwanbae@gm.gist.ac.kr and haegonj@gist.ac.kr
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link indicating that its source code is publicly available.
Open Datasets | Yes | In this section, we present our experimental results for two public datasets: ETH (Pellegrini et al. 2009) and UCY (Lerner, Chrysanthou, and Lischinski 2007), which contain pedestrian trajectories and various real-world human interactions. Both datasets include 5 subsets (ETH, HOTEL, UNIV, ZARA1 and ZARA2).
Dataset Splits | Yes | Following previous works (Alahi et al. 2016; Gupta et al. 2018; Kosaraju et al. 2019; Huang et al. 2019; Mohamed et al. 2020; Sun, Zhao, and He 2020), we adopt a leave-one-out evaluation strategy, in which four datasets are used for training and the remaining one is used for testing.
Hardware Specification | Yes | The training is performed on an NVIDIA 2080Ti GPU, which usually takes 12 hours.
Software Dependencies | No | The paper mentions the use of TCNs and of PReLU activation functions, but does not specify version numbers for any software libraries or dependencies (e.g., Python, PyTorch, TensorFlow, CUDA versions).
Experiment Setup | Yes | We use one GCN and four TPCNN blocks, which shows the best results in ablation studies. Our model is trained for 256 epochs with the SGD optimizer. We use a mini-batch size of 128 with an initial learning rate of 1e-4 and a decay rate of 0.8 every 32 epochs. Data augmentation schemes such as random rotation, flip and scaling are utilized.
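The comparison table quoted above reports results as ADE/FDE, the standard trajectory-prediction metrics (Average and Final Displacement Error). A minimal sketch of how they are conventionally computed; the function name and array shapes are illustrative, not taken from the paper:

```python
import numpy as np

def ade_fde(pred, gt):
    """ADE: mean per-step L2 distance between predicted and ground-truth
    positions over the forecast horizon.
    FDE: L2 distance at the final predicted step.

    pred, gt: arrays of shape (T, 2) holding (x, y) positions for T
    future timesteps.
    """
    dist = np.linalg.norm(pred - gt, axis=-1)  # per-step Euclidean error
    return float(dist.mean()), float(dist[-1])
```

In multi-sample evaluation protocols, these are typically taken as the minimum over the generated trajectory samples per pedestrian.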
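The leave-one-out protocol described in the Dataset Splits row can be sketched as follows; the subset names come from the report, while the helper itself is illustrative:

```python
SUBSETS = ["ETH", "HOTEL", "UNIV", "ZARA1", "ZARA2"]

def leave_one_out_splits(subsets=SUBSETS):
    """Yield (train_subsets, test_subset) pairs: each subset is held
    out once for testing while the remaining four are used for training."""
    for held_out in subsets:
        train = [s for s in subsets if s != held_out]
        yield train, held_out
```

This yields five train/test configurations, one per held-out subset, matching the evaluation tables in trajectory-prediction papers that report a row per subset plus an average.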
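The quoted training setup (initial learning rate 1e-4, decay rate 0.8 every 32 epochs, 256 epochs total) implies a step-decay schedule. A minimal sketch, assuming a standard multiplicative step decay (e.g., PyTorch's `StepLR` with `gamma=0.8`, `step_size=32`):

```python
def lr_at_epoch(epoch, base_lr=1e-4, decay=0.8, step=32):
    """Step-decay schedule: multiply the learning rate by `decay`
    once every `step` epochs, starting from `base_lr`."""
    return base_lr * decay ** (epoch // step)
```

Over the reported 256 epochs this applies the 0.8 factor seven times by the final epoch, ending near 2.1e-5.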