Enhancing Fine-Grained Urban Flow Inference via Incremental Neural Operator

Authors: Qiang Gao, Xiaolong Song, Li Huang, Goce Trajcevski, Fan Zhou, Xueqin Chen

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on large-scale real-world datasets demonstrate the superiority of our proposed solution against the baselines.
Researcher Affiliation | Academia | Qiang Gao (1), Xiaolong Song (1), Li Huang (1), Goce Trajcevski (2), Fan Zhou (3), Xueqin Chen (4). (1) Southwestern University of Finance and Economics, Chengdu, China, 611130; (2) Iowa State University, Iowa, USA; (3) University of Electronic Science and Technology of China, Chengdu, China, 610054; (4) Delft University of Technology, Delft, Netherlands, 2628CN.
Pseudocode | No | The paper describes the model architecture and incremental enhancement methods in detail, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | For reproducibility, our source codes are available at https://github.com/Longsuni/UNO.git.
Open Datasets | Yes | Following [Yu et al., 2023], we conduct experiments on four real-world taxi traffic datasets collected continuously for four years (from 2013 to 2016) in Beijing city.
Dataset Splits | No | The paper refers to 'validation performance (i.e., MSE loss)' and 'validation losses' but does not give explicit split sizes (percentages or counts) for the training, validation, or test sets. It only mentions that a batch of flow maps is randomly selected for optimization.
Hardware Specification | Yes | Methods are implemented using PyTorch, accelerated by one NVIDIA GeForce RTX 4090.
Software Dependencies | No | The paper states 'Methods are implemented using PyTorch' but does not specify version numbers for PyTorch or any other software libraries.
Experiment Setup | Yes | The Adam optimizer is employed with an initial learning rate of 1e-4 and the number of training epochs set to 60. To prevent overfitting, we apply the learning rate decay trick and set the dropout rate to 0.3. The batch size is set to 16. (A hedged configuration sketch based on these settings follows the table.)
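The experiment setup reported above maps onto a standard PyTorch training loop. Below is a minimal sketch, assuming the hyperparameters quoted from the paper (Adam, initial learning rate 1e-4, 60 epochs, dropout 0.3, batch size 16) and the single RTX 4090 noted in the hardware row. The two-layer convolutional model, the random stand-in tensors for the Beijing flow maps, and the StepLR decay schedule are illustrative assumptions; the actual UNO architecture and the unspecified decay scheme live in the released repository, not here.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Hyperparameters quoted from the paper's experiment setup.
LEARNING_RATE = 1e-4   # initial learning rate for Adam
EPOCHS = 60            # number of training epochs
DROPOUT = 0.3          # dropout rate
BATCH_SIZE = 16        # batch size

# Hardware noted in the paper: one NVIDIA GeForce RTX 4090.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model (assumption): a small conv stack with 4x upsampling only
# illustrates where the dropout rate applies in a coarse-to-fine mapping;
# the real incremental neural operator is defined in the UNO repository.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Dropout(DROPOUT),
    nn.Upsample(scale_factor=4, mode="nearest"),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
).to(device)

# Random tensors stand in for the coarse- and fine-grained Beijing flow maps
# so the sketch runs end to end without the actual datasets.
coarse = torch.randn(256, 1, 32, 32)
fine = torch.randn(256, 1, 128, 128)
loader = DataLoader(TensorDataset(coarse, fine), batch_size=BATCH_SIZE, shuffle=True)

criterion = nn.MSELoss()  # the paper reports MSE as the validation loss
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
# The paper mentions a learning-rate decay trick without giving the schedule;
# StepLR is assumed here purely for illustration.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(EPOCHS):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

For the authors' actual model definition and data preparation, see the repository linked in the Open Source Code row (https://github.com/Longsuni/UNO.git).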