Forecast the Plausible Paths in Crowd Scenes

Authors: Hang Su, Jun Zhu, Yinpeng Dong, Bo Zhang

IJCAI 2017

Reproducibility assessment (variable, result, and supporting evidence from the paper):

Research Type: Experimental. "Extensive experiments on public datasets demonstrate that our method obtains the state-of-the-art performance in both structured and unstructured scenes by exploring the complex and uncertain motion patterns, even if the occlusion is serious or the observed trajectories are noisy."

Researcher Affiliation: Academia. Hang Su, Jun Zhu, Yinpeng Dong, Bo Zhang; Tsinghua National Lab for Information Science and Technology; State Key Lab of Intelligent Technology and Systems; Center for Bio-Inspired Computing Research; Department of Computer Science and Technology, Tsinghua University, Beijing, China. {suhangss,dcszj,dongyp13,dcszb}@tsinghua.edu.cn

Pseudocode: No. The paper describes the proposed method with figures and mathematical equations but provides no structured pseudocode or algorithm blocks.

Open Source Code: No. The paper makes no statement about source-code availability and gives no link to a code repository.

Open Datasets: Yes. Experiments are conducted on two public datasets: the CUHK Crowd Dataset [Shao et al., 2014], which includes hundreds of crowd videos of different densities and perspective scales across many environments, each containing thousands of keypoint trajectories; and the subway station dataset [Zhou et al., 2011], a 30-minute sequence collected in the New York Grand Central Station that yields more than 40,000 keypoint trajectories in total.

Dataset Splits: Yes. "In our experiments, we randomly select a half of the trajectories to train the model, and keep the rest for testing."

Hardware Specification: No. The paper gives no details about the hardware used to run the experiments.

Software Dependencies: No. The paper names the algorithms and methods used (e.g., LSTM, deep Gaussian processes, BPTT, SGD) but does not list specific software packages with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x).

Experiment Setup: Yes. "In our experiments, we use a social-aware LSTM with 128 hidden units, i.e., the input trajectories are mapped to a 128-dimensional hidden feature vector (h_t); moreover, we set the latent variational variable in deep Gaussian processes as 8-dimensional vectors (z_t)." The model uses one LSTM layer and a two-layer Gaussian-process model in the social-aware LSTM and deep Gaussian processes modules, respectively.
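Since the paper releases no code, the dimensions above are the only concrete setup details available. The sketch below illustrates those stated dimensions only: a single vanilla LSTM cell with 128 hidden units mapping 2-D trajectory points to h_t, and a plain linear map standing in for the deep-Gaussian-process encoder that produces the 8-dimensional latent z_t. The 2-D input size, the random initialisation, and the linear stand-in for the DGP are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions stated in the paper's experiment setup; INPUT_DIM is an
# assumption (x, y trajectory coordinates per time step).
INPUT_DIM = 2
HIDDEN_DIM = 128  # social-aware LSTM hidden units -> h_t
LATENT_DIM = 8    # DGP variational latent variable -> z_t

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One vanilla LSTM step; stacked gate order is [i, f, o, g]."""
    pre = W @ x_t + U @ h_prev + b
    i, f, o, g = np.split(pre, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_t = sig(f) * c_prev + sig(i) * np.tanh(g)
    h_t = sig(o) * np.tanh(c_t)
    return h_t, c_t

# Randomly initialised parameters with the shapes implied by the setup.
W = rng.standard_normal((4 * HIDDEN_DIM, INPUT_DIM)) * 0.1
U = rng.standard_normal((4 * HIDDEN_DIM, HIDDEN_DIM)) * 0.1
b = np.zeros(4 * HIDDEN_DIM)

# Run a short dummy observed trajectory through the cell.
h = np.zeros(HIDDEN_DIM)
c = np.zeros(HIDDEN_DIM)
for t in range(5):
    x_t = rng.standard_normal(INPUT_DIM)
    h, c = lstm_step(x_t, h, c, W, U, b)

# Hypothetical linear encoder in place of the two-layer DGP: h_t -> z_t.
W_z = rng.standard_normal((LATENT_DIM, HIDDEN_DIM)) * 0.1
z = W_z @ h
print(h.shape, z.shape)  # (128,) (8,)
```

The only point of the sketch is to make the tensor shapes concrete (h_t is 128-dimensional, z_t is 8-dimensional); the paper's actual model additionally pools hidden states across neighbouring pedestrians ("social-aware") and replaces the linear map with a two-layer Gaussian-process model.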