CARPe Posterum: A Convolutional Approach for Real-Time Pedestrian Path Prediction

Authors: Matías Mendieta, Hamed Tabkhi

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Notable results in both inference speed and prediction accuracy are achieved, improving FPS considerably in comparison to current state-of-the-art methods while delivering competitive accuracy on well-known path prediction datasets.
Researcher Affiliation | Academia | Matías Mendieta, Hamed Tabkhi, University of North Carolina at Charlotte, mmendiet@uncc.edu, htabkhiv@uncc.edu
Pseudocode | No | The paper provides a high-level illustration of the proposed method in Figure 1 and a detailed model overview in Figure 2, but these are diagrams, not pseudocode or algorithm blocks.
Open Source Code | Yes | https://github.com/TeCSAR-UNCC/CARPe_Posterum
Open Datasets | Yes | We evaluate our model on two widely used datasets in the path prediction domain, ETH (Pellegrini et al. 2009b) and UCY (Lerner, Chrysanthou, and Lischinski 2007).
Dataset Splits | No | The paper states, 'a leave-one-out approach is applied for training and testing among the five scenarios,' but does not explicitly define a separate validation split with percentages or counts for reproducibility (a leave-one-out sketch follows the table).
Hardware Specification | Yes | We implemented the model in PyTorch and trained it on an Nvidia Titan V GPU. In Table 2, we first compare the FPS of CARPe on an Nvidia P100 GPU as a baseline. For both GPU and single-core CPU inference, CARPe provides over 17x and 8x speedups, respectively, in comparison to Next. FPS numbers are reported on the Nvidia Jetson Nano embedded device for both GPU and single-core CPU.
Software Dependencies | No | We implemented the model in PyTorch and trained it on an Nvidia Titan V GPU. An open-source PyTorch extension library for graph convolution (Fey and Lenssen 2019) was used as the basis for implementing the Graph Module. While PyTorch and this graph-convolution library are mentioned, no specific version numbers are provided (a minimal usage sketch of the library follows the table).
Experiment Setup | Yes | The model was trained end-to-end with a frame batch size of 64 for 80 epochs. We use the Adam (Kingma and Ba 2014) optimizer with a learning rate of 0.01 and a gradient clip of 5. A mean squared error loss was used for training (a training-loop sketch with these hyperparameters follows the table).
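
The leave-one-out protocol quoted under Dataset Splits can be made concrete with a short sketch. The five scenario names (eth, hotel, univ, zara1, zara2) are the conventional ETH/UCY scene labels and are an assumption here, since the excerpt only says "five scenarios".

```python
# Hypothetical sketch of the leave-one-out protocol: train on four ETH/UCY
# scenarios and test on the held-out fifth, rotating through all five.
# The scene names below are assumed conventional labels, not taken from the paper.
SCENES = ["eth", "hotel", "univ", "zara1", "zara2"]

def leave_one_out_splits(scenes=SCENES):
    """Yield (train_scenes, test_scene) pairs, one per held-out scenario."""
    for held_out in scenes:
        train = [s for s in scenes if s != held_out]
        yield train, held_out

for train_scenes, test_scene in leave_one_out_splits():
    print(f"train on {train_scenes}, test on {test_scene}")
```

Note that this yields only train/test pairs; as the assessment points out, no separate validation split is defined.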
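The Graph Module is reported to build on the open-source PyTorch extension library for graph convolution by Fey and Lenssen (2019), i.e. PyTorch Geometric. The snippet below is only a minimal usage sketch of that library's basic graph-convolution layer (GCNConv); the paper's actual Graph Module architecture and layer choice are not specified in the excerpt, so the layer and tensor sizes here are illustrative assumptions.

```python
# Minimal PyTorch Geometric sketch: a single graph-convolution layer over a
# small fully connected pedestrian graph. This illustrates the dependency only,
# not the paper's Graph Module.
import torch
from torch_geometric.nn import GCNConv

x = torch.randn(4, 16)  # 4 pedestrians, 16-dimensional features (illustrative sizes)
edge_index = torch.tensor([[0, 1, 2, 3, 1, 2, 3, 0],
                           [1, 2, 3, 0, 0, 1, 2, 3]], dtype=torch.long)  # directed edges

conv = GCNConv(in_channels=16, out_channels=32)
out = conv(x, edge_index)  # shape: [4, 32]
print(out.shape)
```

Because no version numbers are given, pinning torch and torch_geometric versions would be a guess; that is precisely the reproducibility gap flagged in the Software Dependencies row.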
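The hyperparameters quoted under Experiment Setup map onto a standard PyTorch training loop. The model and data below are placeholders; only the batch size (64), epoch count (80), Adam learning rate (0.01), gradient clip (5), and MSE loss come from the paper, and since the excerpt does not say whether the clip is applied to the gradient norm or the gradient values, norm clipping is assumed here.

```python
# Hedged training-loop sketch using the reported hyperparameters.
import torch
import torch.nn as nn

model = nn.Linear(16, 24)  # placeholder for the CARPe network
loader = [(torch.randn(64, 16), torch.randn(64, 24))]  # placeholder batches of size 64

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # Adam, lr = 0.01
criterion = nn.MSELoss()                                   # mean squared error loss

for epoch in range(80):                                    # 80 epochs
    for observed, future in loader:
        optimizer.zero_grad()
        loss = criterion(model(observed), future)
        loss.backward()
        # "gradient clip of 5" -- norm clipping assumed; value clipping is also possible
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
        optimizer.step()
```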