Harnessing Fourier Isovists and Geodesic Interaction for Long-Term Crowd Flow Prediction

Authors: Samuel S. Sohn, Seonghyeon Moon, Honglu Zhou, Mihee Lee, Sejong Yoon, Vladimir Pavlovic, Mubbasir Kapadia

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In order to evaluate the scalability of models to large and complex environments, which the only existing LTCFP dataset is unsuitable for, a new synthetic crowd dataset with both real and synthetic environments has been generated. In its nascent state, LTCFP has much to gain from our key contributions.
Researcher Affiliation | Academia | ¹Rutgers University, USA; ²The College of New Jersey, USA. {sss286, sm2062, hz289, ml1323, vladimir, mk1353}@cs.rutgers.edu, yoons@tcnj.edu
Pseudocode | No | No pseudocode or algorithm blocks were found; the methodology is described using text and diagrams.
Open Source Code | Yes | The Supplementary Materials, dataset, and code are available at sssohn.github.io/GeoInteractNet.
Open Datasets | Yes | The Supplementary Materials, dataset, and code are available at sssohn.github.io/GeoInteractNet. [...] The 2 synthetic datasets consist of 8,000 total training and 2,400 total testing crowd scenarios with thousands of unique synthetic environments.
Dataset Splits | No | The paper specifies '8,000 total training and 2,400 total testing crowd scenarios' for the synthetic datasets and states that both real datasets are used for testing only. It does not explicitly mention or specify a validation split. (A sketch of this layout follows the table.)
Hardware Specification | Yes | A machine with an Intel Core i9-9960X 3.10 GHz CPU, 64 GB RAM, and an NVIDIA GeForce RTX 2080 Ti 11 GB GPU was used for all training and testing.
Software Dependencies | No | The paper mentions Adam optimization, stochastic gradient descent, and specific deep learning models (U-Net, Attention U-Net, SegNet, CAGE), but it does not provide version numbers for any software libraries (e.g., PyTorch, TensorFlow, or Python).
Experiment Setup | Yes | Adam optimization [Kingma and Ba, 2014] was used for training U-Net, Attention U-Net, and GINet, while stochastic gradient descent (with momentum = 0.9) was used for CAGE and SegNet. Prior to training, the data was shuffled, and the batch size was set to 4. All models were trained exclusively on both synthetic datasets for 100 epochs with a learning rate of 0.01, which performed best across models among {0.1, 0.01, 0.001}. The loss function was set to Mean Absolute Error (MAE). (A training-loop sketch of this configuration follows the table.)
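
As noted in the Dataset Splits row, only training and testing partitions are reported. A minimal sketch of that layout, assuming a simple dictionary configuration; the keys are hypothetical and only the scenario counts and the testing-only role of the real datasets come from the paper:

```python
# Hypothetical split configuration reconstructed from the reported counts.
SPLITS = {
    "synthetic": {"train": 8_000, "test": 2_400},  # 2 synthetic datasets combined
    "real":      {"train": 0,     "test": "all"},  # both real datasets: testing only
}
# No validation split is specified; the learning-rate sweep over
# {0.1, 0.01, 0.001} is reported against these splits only.
```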
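
The Experiment Setup row pins down enough detail to reconstruct a training loop. Below is a minimal PyTorch sketch of that configuration; the model and dataset objects are hypothetical stand-ins, and the paper does not publish this code:

```python
import torch
from torch.utils.data import DataLoader

def make_optimizer(model_name, model, lr=0.01):
    """Adam for U-Net, Attention U-Net, and GINet; SGD with momentum 0.9 for CAGE and SegNet."""
    if model_name in {"unet", "attention_unet", "ginet"}:
        return torch.optim.Adam(model.parameters(), lr=lr)
    return torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

def train(model, dataset, model_name, epochs=100, batch_size=4, lr=0.01):
    # Data shuffled prior to training, batch size 4, as reported.
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = make_optimizer(model_name, model, lr)
    criterion = torch.nn.L1Loss()  # Mean Absolute Error (MAE)
    for _ in range(epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
```

The only model-specific branch is the optimizer choice; everything else (shuffling, batch size, epoch count, learning rate, and MAE loss) is shared across all five models, which matches the paper's description of a uniform training protocol.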