Ising-Traffic: Using Ising Machine Learning to Predict Traffic Congestion under Uncertainty
Authors: Zhenyu Pan, Anshujit Sharma, Jerry Yao-Chieh Hu, Zhuo Liu, Ang Li, Han Liu, Michael Huang, Tong Geng
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that, compared with 7 traditional SOTA methods, Ising-Traffic delivers on average 98× speedups with a 5% accuracy improvement. |
| Researcher Affiliation | Academia | ¹University of Rochester, ²Northwestern University, ³Pacific Northwest National Laboratory; {zhenyupan, anshujit.sharma, zhuo.liu, michael.huang, tong.geng}@rochester.edu, {jhu, hanliu}@northwestern.edu, ang.li@pnnl.gov |
| Pseudocode | Yes | Algorithm 1: Inverse Reconstruct-Ising |
| Open Source Code | No | The paper does not provide a direct link or explicit statement about the availability of its source code. |
| Open Datasets | Yes | We use four real-world datasets (Q-traffic, PEMS4, PEMS8, PEMS-BAY). Q-traffic contains traffic speed every 15 minutes for 15,073 city road segments in Beijing, including 5,856 time slices. PEMS4 (P4) contains speed data in the San Francisco area from 01/01 to 02/28 in 2018 with 307 freeway segments. PEMS8 (P8) contains speed data in San Bernardino from 06/01 to 08/31 in 2016 with 170 freeway segments. PEMS-BAY (PB) contains speed data in the Bay Area from 01/01 to 05/31 in 2017 with 325 freeway segments. |
| Dataset Splits | No | The paper mentions 'training data with different missing rates' and presents 'train / test' results in tables, but it specifies neither the split percentages or sample counts nor the methodology (e.g., random split, temporal split), and it does not mention a validation set. |
| Hardware Specification | Yes | The Forward Reconstruct-Ising of Ising-Traffic is performed on a simulated BRIM system (Afoakwa et al. 2021). The GPU and CPU used to evaluate the latency of baseline solutions and perform Forward Predict-Ising are Nvidia A100-40GB and Intel Xeon Gold 6330. |
| Software Dependencies | No | The paper mentions using deep learning methods like CNNs, RNNs, and GNNs, but does not provide specific version numbers for any software, libraries, or frameworks used in the experiments. |
| Experiment Setup | Yes | It only takes 10 epochs to train Predict-Ising to achieve high accuracy. We inject different levels of uncertainty in training and average their loss values to update the parameters, making the regression more robust to traffic graph reconstruction tasks with random uncertainty from the real world. The parameters are obtained by solving I = argmin_I {Equation (4)} through mini-batch gradient descent with learning rate decay. |
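The training procedure quoted in the Experiment Setup row — injecting several uncertainty (missing-rate) levels per mini-batch, averaging their losses, and applying mini-batch gradient descent with learning-rate decay — can be sketched as follows. This is a minimal illustrative sketch on synthetic linear-regression data, not the paper's Ising formulation; the missing rates, batch size, and decay factor are assumptions, and the 10-epoch budget follows the quoted text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a traffic regression task: predict targets Y
# from inputs X under a linear model (the paper fits Ising parameters
# via Equation (4); here we use plain least squares for illustration).
n, d = 256, 16
W_true = rng.normal(size=(d, d))
X = rng.normal(size=(n, d))
Y = X @ W_true

W = np.zeros((d, d))                 # parameters to learn
lr = 0.05                            # initial learning rate (assumed)
missing_rates = [0.1, 0.3, 0.5]      # injected uncertainty levels (assumed)
batch = 32

def full_loss(W):
    return ((X @ W - Y) ** 2).mean()

losses = [full_loss(W)]
for epoch in range(10):              # 10 epochs, per the quoted setup
    for start in range(0, n, batch):
        Xb, Yb = X[start:start + batch], Y[start:start + batch]
        grad = np.zeros_like(W)
        for p in missing_rates:
            # Inject uncertainty: randomly zero out a fraction p of inputs.
            mask = rng.random(Xb.shape) > p
            Xm = Xb * mask
            err = Xm @ W - Yb
            # Accumulate the MSE gradient; dividing by len(missing_rates)
            # averages the per-level losses before the update.
            grad += 2 * Xm.T @ err / (Xb.shape[0] * len(missing_rates))
        W -= lr * grad               # mini-batch gradient descent step
    lr *= 0.9                        # learning-rate decay per epoch (assumed)
    losses.append(full_loss(W))
```

Averaging gradients across several masking levels in one update (rather than training on a single fixed missing rate) is what makes the fitted parameters robust to the range of uncertainty seen at test time.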