Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting
Authors: Yaguang Li, Rose Yu, Cyrus Shahabi, Yan Liu
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the framework on two real-world large scale road network traffic datasets and observe consistent improvement of 12%-15% over state-of-the-art baselines. We conducted extensive experiments on two large-scale real-world datasets, and the proposed approach obtains significant improvement over state-of-the-art baseline methods. |
| Researcher Affiliation | Academia | University of Southern California, California Institute of Technology {yaguang, shahabi, yanliu.cs}@usc.edu, rose@caltech.edu |
| Pseudocode | No | The paper describes mathematical equations for its models (e.g., the DCGRU equations) and provides a system architecture diagram (Figure 2), but does not include any explicit pseudocode blocks or algorithms. (The DCGRU update equations are sketched after this table for reference.) |
| Open Source Code | Yes | The source code is available at https://github.com/liyaguang/DCRNN. |
| Open Datasets | Yes | The experiments use two real-world large-scale traffic datasets, METR-LA and PEMS-BAY, both of which are publicly available (linked from the source code repository at https://github.com/liyaguang/DCRNN). |
| Dataset Splits | Yes | 70% of data is used for training, 20% are used for testing while the remaining 10% for validation. (A sketch of such a chronological split appears after this table.) |
| Hardware Specification | No | The paper mentions using TensorFlow and the Adam optimizer, but it does not specify any hardware details such as CPU or GPU models or memory used for the experiments. |
| Software Dependencies | No | All neural network based approaches are implemented using Tensorflow (Abadi et al., 2016), and trained using the Adam optimizer with learning rate annealing. Both the ARIMA_kal and VAR baselines are implemented using the statsmodel Python package. (Specific version numbers for TensorFlow or statsmodels are not provided.) |
| Experiment Setup | Yes | The best hyperparameters are chosen using the Tree-structured Parzen Estimator (TPE) (Bergstra et al., 2011) on the validation dataset. Detailed parameter settings for DCRNN as well as baselines are available in Appendix E, which specifies hidden layer units, learning rates, epochs, dropout, weight decay, batch size, loss function, the diffusion step K, and scheduled sampling parameters for FNN, FC-LSTM, and DCRNN. (A minimal TPE usage sketch appears after this table.) |
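
As referenced in the Pseudocode row, the paper defines its recurrent cell (DCGRU) by replacing the matrix multiplications in a standard GRU with the diffusion convolution over the road-network graph. A sketch of the update equations in the paper's notation, where ⋆_G denotes diffusion convolution, ⊙ element-wise multiplication, and [·,·] concatenation (reproduced here for reference; consult the paper for the exact definitions):

```latex
% DCGRU cell: a GRU whose matrix multiplications are replaced by the
% diffusion convolution \star_{\mathcal{G}} (sketch in the paper's notation).
\begin{align*}
r^{(t)} &= \sigma\left(\Theta_r \star_{\mathcal{G}} [X^{(t)}, H^{(t-1)}] + b_r\right)\\
u^{(t)} &= \sigma\left(\Theta_u \star_{\mathcal{G}} [X^{(t)}, H^{(t-1)}] + b_u\right)\\
C^{(t)} &= \tanh\left(\Theta_C \star_{\mathcal{G}} [X^{(t)}, (r^{(t)} \odot H^{(t-1)})] + b_c\right)\\
H^{(t)} &= u^{(t)} \odot H^{(t-1)} + (1 - u^{(t)}) \odot C^{(t)}
\end{align*}
```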
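
The 70/10/20 split noted in the Dataset Splits row is chronological (no shuffling), which is standard for traffic forecasting since the test set must lie in the future of the training data. A minimal sketch assuming time-ordered NumPy data; the function name and array shape are illustrative, not taken from the DCRNN codebase:

```python
import numpy as np

def chronological_split(data, train_frac=0.7, val_frac=0.1):
    """Split a time-ordered array into train/val/test without shuffling,
    so evaluation data lies strictly in the future of the training data."""
    n = len(data)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]  # remaining ~20%
    return train, val, test

# Illustrative usage on dummy data shaped (timesteps, sensors);
# METR-LA has 207 sensors and ~34k five-minute readings per sensor.
series = np.random.rand(34272, 207)
train, val, test = chronological_split(series)
print(len(train), len(val), len(test))  # ~70% / ~10% / ~20%
```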
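
The Experiment Setup row mentions TPE for hyperparameter search. The hyperopt library ships a TPE implementation, so a minimal sketch looks like the following; the search space, the synthetic `validation_mae` objective, and `max_evals=50` are hypothetical illustrations, not the paper's actual tuning setup:

```python
from hyperopt import fmin, tpe, hp, Trials

def validation_mae(params):
    """Hypothetical stand-in for 'train DCRNN with these hyperparameters
    and return validation MAE'; a synthetic function so the sketch runs."""
    return (params["learning_rate"] - 0.01) ** 2 + 0.1 / params["num_units"]

# Hypothetical search space over a few DCRNN-style hyperparameters.
space = {
    "learning_rate": hp.loguniform("learning_rate", -9, -4),  # ~1e-4 .. 2e-2
    "num_units": hp.choice("num_units", [32, 64, 128]),
    "max_diffusion_step": hp.choice("max_diffusion_step", [1, 2, 3]),  # K
}

trials = Trials()
best = fmin(fn=validation_mae, space=space,
            algo=tpe.suggest, max_evals=50, trials=trials)
print(best)  # best point found (choice parameters reported as indices)
```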