Enhancing Urban Flow Maps via Neural ODEs

Authors: Fan Zhou, Liang Li, Ting Zhong, Goce Trajcevski, Kunpeng Zhang, Jiahao Wang

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental evaluations on two real-world datasets demonstrate that FODE significantly outperforms several baseline approaches."
Researcher Affiliation | Academia | School of Information and Software Engineering, University of Electronic Science and Technology of China; Iowa State University, Ames, IA; University of Maryland, College Park, MD
Pseudocode | Yes | "Algorithm 1: Gradient calculation in FODE."
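The paper's Algorithm 1 (not reproduced in this report) computes gradients for the ODE block. Assuming it follows the standard adjoint-sensitivity method used by Neural ODEs, the idea can be sketched on a scalar toy problem: integrate the state forward, then integrate the adjoint backward while accumulating the parameter gradient. All names here are illustrative, not taken from the FODE code.

```python
import math

def f(z, theta):
    # toy dynamics dz/dt = theta * z, standing in for the ODE-block network
    return theta * z

def forward(z0, theta, T, n):
    # forward pass: integrate the state with fixed-step Euler
    h = T / n
    zs = [z0]
    for _ in range(n):
        zs.append(zs[-1] + h * f(zs[-1], theta))
    return zs

def adjoint_grad(z0, theta, T, n):
    # adjoint-style gradient for loss L = z(T):
    # solve da/dt = -a * (df/dz) backward in time and accumulate
    # dL/dtheta = integral of a(t) * (df/dtheta) dt
    h = T / n
    zs = forward(z0, theta, T, n)
    a = 1.0        # a(T) = dL/dz(T) for L = z(T)
    grad = 0.0
    for i in range(n, 0, -1):
        # for these dynamics: df/dz = theta, df/dtheta = z
        grad += h * a * zs[i]   # accumulate parameter gradient
        a += h * a * theta      # step the adjoint backward one step
    return grad

z0, theta, T, n = 1.0, 0.5, 1.0, 10000
g = adjoint_grad(z0, theta, T, n)
exact = z0 * T * math.exp(theta * T)  # analytic dL/dtheta for this toy ODE
```

For dz/dt = θz with L = z(T), the analytic gradient is z0·T·e^{θT}, so the numerical adjoint result can be checked directly; libraries such as torchdiffeq wrap the same mechanism behind `odeint_adjoint`.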
Open Source Code | Yes | "We note that the details of other network settings are described in the source implementation: https://github.com/Anewnoob/FODE"
Open Datasets | Yes | "We evaluate all the methods using two real-world urban flow datasets: (1) Taxi BJ [Liang et al., 2019], a taxi GPS dataset including taxi flows from July 1, 2014 to October 31, 2014; and (2) Bike NYC, collected from an open website (https://www.citibikenyc.com/system-data), which contains data from January 1, 2019 to June 30, 2019."
Dataset Splits | No | The paper mentions performing a test on the validation set during training, but does not provide specific split percentages, counts, or a method for reproducing the validation split.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU or CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions the Adam optimizer and the Dopri5 numerical method, but does not provide version numbers for these or for any other software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used.
Experiment Setup | Yes | "Adam [Kingma and Ba, 2014] is adopted to train FODE with batch size 16 and learning rate 1e-4. We leverage the Dopri5 numerical method, which can adaptively choose the step size, as the ODE solver in FODE. FODE consists of 128 channels and 1 ODE block. We also present a simplified version, S-FODE, which contains 64 channels while other components are the same as FODE. During training, we halve the learning rate and perform a test on the validation set every 20 epochs."
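The reported schedule (Adam with learning rate 1e-4 and batch size 16, halving the learning rate and validating every 20 epochs) can be sketched as a training-loop skeleton. The update and validation steps are stubs, and all names are illustrative rather than taken from the paper's code.

```python
def validate():
    # stub for the periodic test on the validation set described in the paper
    pass

def train(num_epochs=100, init_lr=1e-4, batch_size=16):
    """Skeleton of the reported setup: Adam-style updates with batch
    size 16 and initial lr 1e-4, halving the lr and running a
    validation pass every 20 epochs."""
    lr = init_lr
    history = []
    for epoch in range(1, num_epochs + 1):
        # ... one epoch of Adam updates over batches of `batch_size` ...
        if epoch % 20 == 0:
            lr *= 0.5   # halve the learning rate
            validate()  # perform a test on the validation set
        history.append((epoch, lr))
    return history

history = train()
```

After 100 epochs the learning rate has been halved five times, ending at 1e-4 x 0.5^5 = 3.125e-6; with a framework such as PyTorch the same schedule is typically expressed via `torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)`.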