Predicting Traffic Congestion Evolution: A Deep Meta Learning Approach

Authors: Yidan Sun, Guiyuan Jiang, Siew-Kei Lam, Peilan He

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Compared to all state-of-the-art methods, our framework achieves significantly better prediction performance on two congestion evolution behaviors (propagation and decay) when evaluated using a real-world dataset.
Researcher Affiliation | Academia | Yidan Sun, Guiyuan Jiang, Siew-Kei Lam and Peilan He, Nanyang Technological University, Singapore. ysun014@e.ntu.edu.sg, {gyjiang, assklam}@ntu.edu.sg, phe002@e.ntu.edu.sg
Pseudocode | No | The paper describes the DMLM model components and their interactions through textual descriptions and diagrams (Figures 1 and 3), but does not provide structured pseudocode or algorithm blocks. (A hedged sketch of a meta-learned LSTM cell follows the table.)
Open Source Code | Yes | The source code is available at https://github.com/HelenaYD/DMLM.
Open Datasets | Yes | The road network for our experiments is obtained from OpenStreetMap [1]... POIs. The POIs are collected from government website [2]... Traffic Data. The historical traffic speeds are calculated based on bus trajectories derived from bus arrival data [3]. Footnotes link to: [1] https://www.openstreetmap.org/export, [2] https://data.gov.sg/dataset?q=Places+of+Interest, [3] https://www.mytransport.sg/content/mytransport/home/dataMall.html
Dataset Splits | No | The traffic data is collected from Aug. 01 to Nov. 30, 2018, where the data of the first 90 days (75%) is used for training and the remaining data is used for testing. (A hedged time-based split sketch follows the table.)
Hardware Specification | Yes | We implemented the DMLM model using the PyTorch framework on an Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz with 32 GB RAM.
Software Dependencies | No | We implemented the DMLM model using the PyTorch framework. No version number for PyTorch is specified.
Experiment Setup | Yes | The critical hyperparameters are optimized via grid search as follows. Size of hidden layers of the meta learners' FCNs in Meta-LSTM and Meta-Attention: \hat{D}^{ml}_h = 64, \hat{D}^{ma}_h = 8; size of hidden and output layers in lstm-l and lstm-a: D^{lml}_h = 16, D^{lml}_o = 8, D^{lma}_h = 8, D^{lma}_o = 32; size of hidden and output layers in fcn-l and fcn-a: D^{fml}_h = 4, D^{fml}_o = 16, D^{fma}_h = 4, D^{fma}_o = 8; and the dimension of the final representations K = 4. ... In addition, the learning rate is 0.0001 and the batch size is 40. (A hedged configuration sketch follows the table.)
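
As noted in the Pseudocode row, the paper provides no algorithm blocks. The sketch below only illustrates the general "meta-learner generates recurrent weights" idea behind a Meta-LSTM-style component: a small FCN meta-learner maps a context vector (e.g., road-segment or POI features) to the gate parameters of an LSTM cell. All class names, argument names, shapes, and the parameter-generation scheme are assumptions for illustration; this is not the authors' DMLM implementation (see their repository above for the real code).

import torch
import torch.nn as nn


class MetaLSTMCell(nn.Module):
    # Hypothetical meta-learned LSTM cell, NOT the authors' code: meta_fcn produces
    # the input weights W, recurrent weights U, and biases b for all four gates,
    # conditioned on a per-sample context vector.
    def __init__(self, input_dim, hidden_dim, context_dim, meta_hidden=64):
        super().__init__()
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        # One weight matrix over [x, h] plus a bias, for each of the 4 LSTM gates.
        n_params = 4 * hidden_dim * (input_dim + hidden_dim + 1)
        self.meta_fcn = nn.Sequential(
            nn.Linear(context_dim, meta_hidden),
            nn.ReLU(),
            nn.Linear(meta_hidden, n_params),
        )

    def forward(self, x, h, c, context):
        B, D, H = x.size(0), self.input_dim, self.hidden_dim
        params = self.meta_fcn(context)                              # (B, n_params)
        W = params[:, : 4 * H * D].view(B, 4 * H, D)                 # input weights
        U = params[:, 4 * H * D: 4 * H * (D + H)].view(B, 4 * H, H)  # recurrent weights
        b = params[:, 4 * H * (D + H):]                              # (B, 4H) biases
        gates = (torch.bmm(W, x.unsqueeze(-1)).squeeze(-1)
                 + torch.bmm(U, h.unsqueeze(-1)).squeeze(-1) + b)
        i, f, g, o = gates.chunk(4, dim=1)
        c_new = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h_new = torch.sigmoid(o) * torch.tanh(c_new)
        return h_new, c_new


# Toy usage; only the batch size (40) and the meta-learner hidden size (64) come from the paper.
cell = MetaLSTMCell(input_dim=8, hidden_dim=16, context_dim=12, meta_hidden=64)
x, ctx = torch.randn(40, 8), torch.randn(40, 12)
h, c = torch.zeros(40, 16), torch.zeros(40, 16)
h, c = cell(x, h, c, ctx)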
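
The Dataset Splits row quotes a purely chronological split: the first 90 days (roughly 75%) of the Aug. 01 to Nov. 30, 2018 window for training and the remainder for testing. A minimal sketch of such a time-based split, assuming the traffic records sit in a timestamped table; the file name and column name are hypothetical:

import pandas as pd

# Hypothetical file and column names; the paper only states the collection window
# (Aug. 01 - Nov. 30, 2018) and that the first 90 days are used for training.
records = pd.read_csv("traffic_speeds.csv", parse_dates=["timestamp"])

start = pd.Timestamp("2018-08-01")
cutoff = start + pd.Timedelta(days=90)   # first 90 days (~75%) -> training
end = pd.Timestamp("2018-12-01")         # collection ends Nov. 30, 2018

train = records[(records["timestamp"] >= start) & (records["timestamp"] < cutoff)]
test = records[(records["timestamp"] >= cutoff) & (records["timestamp"] < end)]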
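
The Experiment Setup row lists the grid-searched layer sizes, the learning rate and the batch size. The snippet below simply collects those reported values in one place and wires the learning rate into an optimizer; the dictionary keys, the placeholder model and the choice of Adam are assumptions, since the excerpt does not name the optimizer or the full network:

import torch

# Values quoted in the Experiment Setup row; key names paraphrase the paper's notation.
HPARAMS = {
    "meta_lstm_hidden": 64,   # \hat{D}^{ml}_h
    "meta_attn_hidden": 8,    # \hat{D}^{ma}_h
    "lstm_l_hidden": 16,      # D^{lml}_h
    "lstm_l_out": 8,          # D^{lml}_o
    "lstm_a_hidden": 8,       # D^{lma}_h
    "lstm_a_out": 32,         # D^{lma}_o
    "fcn_l_hidden": 4,        # D^{fml}_h
    "fcn_l_out": 16,          # D^{fml}_o
    "fcn_a_hidden": 4,        # D^{fma}_h
    "fcn_a_out": 8,           # D^{fma}_o
    "final_dim": 4,           # K
    "learning_rate": 1e-4,
    "batch_size": 40,
}

model = torch.nn.Linear(HPARAMS["final_dim"], 1)   # placeholder module, not DMLM
optimizer = torch.optim.Adam(model.parameters(), lr=HPARAMS["learning_rate"])  # optimizer choice is an assumption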