Overcoming Forgetting in Fine-Grained Urban Flow Inference via Adaptive Knowledge Replay

Authors: Haoyang Yu, Xovee Xu, Ting Zhong, Fan Zhou

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on four large-scale real-world FUFI datasets demonstrate that our proposed model consistently outperforms strong baselines and effectively mitigates the forgetting problem.
Researcher Affiliation | Academia | University of Electronic Science and Technology of China, Chengdu, Sichuan 610054, China. haoyang.yu417@outlook.com, xovee@live.com, {zhongting, fan.zhou}@uestc.edu.cn
Pseudocode | Yes | Algorithm 1: Adaptive Knowledge Replay (AKR)
Open Source Code | Yes | Source code is available at: https://github.com/PattonYu/CUFAR.
Open Datasets | Yes | Experiments are conducted on four real-world taxi traffic datasets collected continuously over four years (2013 to 2016) in Beijing (Liang et al. 2019). We denote the four datasets as TaxiBJ Task-1 to Task-4.
Dataset Splits | No | The paper implicitly refers to a validation set (e.g., 'validation losses') but does not provide specific details about the training/validation/test splits (e.g., percentages, sample counts, or an explicit splitting methodology).
Hardware Specification | Yes | All experiments are conducted on an RTX 3090 GPU with PyTorch.
Software Dependencies | No | The paper mentions PyTorch but does not specify a version number or other software dependencies with their versions.
Experiment Setup | Yes | The optimizer is Adam, the learning rate is 1e-4, the filter size F is 128, the number of temporal conv layers K is 15 (hourly from 9AM to 12PM), the memory buffer size S is 1,000, and the batch sizes B and B_M are 16 and 2, respectively. The resolution of the fine-grained flow map X^fg is 128x128, and the upscaling factor N is 4.
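
Based on the hyperparameters reported in the experiment setup row, the following is a minimal PyTorch sketch of a replay-augmented training step. The FUFIModel class, the uniform buffer sampling, and the 32x32 coarse-map resolution (inferred from the 128x128 fine-grained map and N = 4) are illustrative assumptions; this is not the authors' AKR implementation, and the temporal branch (K = 15) is omitted.

    import random
    import torch
    from torch import nn, optim

    # Hyperparameters quoted in the setup row
    LR = 1e-4        # Adam learning rate
    F = 128          # filter size
    S = 1_000        # memory buffer capacity
    B, B_M = 16, 2   # current-task batch size and replayed memory batch size
    N = 4            # upscaling factor; assumed 32x32 coarse -> 128x128 fine-grained

    class FUFIModel(nn.Module):
        # Placeholder single-channel upsampling network, not the paper's architecture.
        def __init__(self, filters=F, scale=N):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, filters, 3, padding=1), nn.ReLU(),
                nn.Conv2d(filters, scale * scale, 3, padding=1),
                nn.PixelShuffle(scale),  # (B, N*N, 32, 32) -> (B, 1, 128, 128)
            )

        def forward(self, x_coarse):
            return self.net(x_coarse)

    model = FUFIModel()
    optimizer = optim.Adam(model.parameters(), lr=LR)
    criterion = nn.MSELoss()
    memory_buffer = []  # (coarse, fine) pairs kept from earlier tasks, capped at S

    def train_step(coarse, fine):
        # One step on the current task, augmented with B_M samples replayed from memory.
        if memory_buffer:
            replay = random.sample(memory_buffer, min(B_M, len(memory_buffer)))
            coarse = torch.cat([coarse, torch.stack([c for c, _ in replay])])
            fine = torch.cat([fine, torch.stack([f for _, f in replay])])
        optimizer.zero_grad()
        loss = criterion(model(coarse), fine)
        loss.backward()
        optimizer.step()
        return loss.item()

For example, train_step(torch.rand(B, 1, 32, 32), torch.rand(B, 1, 128, 128)) runs one such step. How samples are selected into the buffer (the adaptive part of AKR) is not covered by the table above and is therefore left out of this sketch.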