Event-Aware Multimodal Mobility Nowcasting

Authors: Zhaonan Wang, Renhe Jiang, Hao Xue, Flora D. Salim, Xuan Song, Ryosuke Shibasaki

AAAI 2022, pp. 4228-4236

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The enhanced event-aware spatio-temporal network, namely EAST-Net, is evaluated on several real-world datasets with a wide variety and coverage of societal events. Both quantitative and qualitative experimental results verify the superiority of our approach compared with the state-of-the-art baselines.
Researcher Affiliation | Academia | Zhaonan Wang (1,3), Renhe Jiang (1,2), Hao Xue (3), Flora D. Salim (3), Xuan Song (1), Ryosuke Shibasaki (1). 1: Center for Spatial Information Science, University of Tokyo; 2: Information Technology Center, University of Tokyo; 3: School of Computing Technologies, RMIT University
Pseudocode | No | No pseudocode or algorithm blocks were explicitly labeled or formatted as such.
Open Source Code | Yes | Code and data are published on https://github.com/underdoc-wang/EAST-Net.
Open Datasets | Yes | We chronologically split each dataset for training, validation, testing with a ratio of 7 : 1 : 2, such that the lengths of test sets are roughly last 20 days for JONAS-{NYC, DC}, 110 days for COVID-CHI, and 40 days for COVID-US.
Dataset Splits | Yes | We chronologically split each dataset for training, validation, testing with a ratio of 7 : 1 : 2, such that the lengths of test sets are roughly last 20 days for JONAS-{NYC, DC}, 110 days for COVID-CHI, and 40 days for COVID-US.
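The chronological 7 : 1 : 2 split quoted above can be sketched as follows. This is a minimal illustration, not the authors' released code; the function name and the use of index arithmetic are assumptions:

```python
def chronological_split(series, ratios=(0.7, 0.1, 0.2)):
    """Split a time-ordered sequence into train/val/test without shuffling.

    Chronological splitting preserves temporal order, so the test set is
    always the most recent portion of the data (as in the paper's setup).
    """
    n = len(series)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = series[:n_train]
    val = series[n_train:n_train + n_val]
    test = series[n_train + n_val:]
    return train, val, test

# e.g. 100 daily snapshots -> 70 train / 10 val / 20 test,
# with the test set covering the last 20 days
data = list(range(100))
train, val, test = chronological_split(data)
```

Because the split is chronological rather than random, the reported test lengths (e.g. roughly the last 20 days for JONAS-{NYC, DC}) follow directly from each dataset's total time span.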
Hardware Specification | Yes | We implement EAST-Net with PyTorch and carry out experiments on a GPU server with NVIDIA GeForce GTX 1080 Ti graphics cards.
Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number or other software dependencies with their versions.
Experiment Setup | Yes | Lengths of observational and nowcasting sequences are set to α = 8 and β = 8, respectively; number of GCRU layers L = 2 with approximation order K = 3 and hidden dimension q = 32; embedding dimensions for Tcov v = 2, µ(sp) = 20 and µ(mo) = 3; mobility prototype memory m = 8 and D = 16. For model training: batch size = 32; learning rate = 5 × 10⁻⁴; maximum epoch = 100 with an early stopper of patience 10; MAE is chosen as the loss and optimized using Adam.
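The early-stopping schedule described above (maximum 100 epochs, patience of 10 on validation loss) can be sketched as follows. This is a hedged, model-free sketch: the `EarlyStopper` class and the simulated loss curve are illustrative assumptions, not the paper's implementation, and the actual validation MAE would come from evaluating EAST-Net each epoch:

```python
class EarlyStopper:
    """Stop training once validation loss has not improved for `patience` epochs."""

    def __init__(self, patience=10):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        # Reset the counter on any improvement; otherwise accumulate.
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training

# Simulated validation-MAE curve (hypothetical): improves for 5 epochs, then plateaus.
losses = [1.0, 0.9, 0.8, 0.7, 0.6] + [0.65] * 30

stopper = EarlyStopper(patience=10)
stopped_at = None
for epoch in range(100):  # maximum epoch = 100, as in the paper
    val_loss = losses[epoch]  # in practice: validation MAE of the model this epoch
    if stopper.step(val_loss):
        stopped_at = epoch
        break
```

With this simulated curve, training halts well before the 100-epoch cap, since 10 consecutive epochs pass without the validation loss improving on its best value.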