Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction
Authors: Junbo Zhang, Yu Zheng, Dekang Qi
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on two types of crowd flows in Beijing and New York City (NYC) demonstrate that the proposed ST-ResNet outperforms six well-known methods. |
| Researcher Affiliation | Collaboration | 1Microsoft Research, Beijing, China 2School of Information Science and Technology, Southwest Jiaotong University, Chengdu, China 3School of Computer Science and Technology, Xidian University, China 4Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1 outlines the ST-ResNet training process. |
| Open Source Code | Yes | The code and datasets have been released at: https://www.microsoft.com/en-us/research/publication/deep-spatio-temporal-residual-networks-for-citywide-crowd-flows-prediction. |
| Open Datasets | Yes | The code and datasets have been released at: https://www.microsoft.com/en-us/research/publication/deep-spatio-temporal-residual-networks-for-citywide-crowd-flows-prediction. |
| Dataset Splits | Yes | We select 90% of the training data for training each model, and the remaining 10% is chosen as the validation set, which is used to early-stop our training algorithm for each model based on the best validation score. |
| Hardware Specification | No | The paper does not provide specific details on the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The python libraries, including Theano (Theano Development Team 2016) and Keras (Chollet 2015), are used to build our models. |
| Experiment Setup | Yes | The convolutions of Conv1 and all residual units use 64 filters of size 3 × 3, and Conv2 uses a convolution with 2 filters of size 3 × 3. The batch size is 32. ... For lengths of the three dependent sequences, we set them as: lc ∈ {3, 4, 5}, lp ∈ {1, 2, 3, 4}, lq ∈ {1, 2, 3, 4}. |
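The Dataset Splits row describes a 90%/10% train/validation split with early stopping on the best validation score. A minimal sketch of that protocol is shown below; `split_train_val`, `fit_epoch`, `validate`, and the `patience` value are illustrative placeholders, not part of the paper's released code.

```python
def split_train_val(samples, train_frac=0.9):
    """Split samples into the first 90% for training and the
    remaining 10% as the validation set, as the paper describes."""
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]

def train_with_early_stopping(fit_epoch, validate, max_epochs=100, patience=5):
    """Generic early-stopping loop: track the epoch with the best
    (lowest) validation score and stop after `patience` epochs without
    improvement. `fit_epoch` and `validate` are stand-ins for the real
    model's training and evaluation steps; `patience` is an assumed
    hyperparameter not specified in the paper."""
    best_score, best_epoch, stalled = float("inf"), -1, 0
    for epoch in range(max_epochs):
        fit_epoch(epoch)               # one pass over the training set
        score = validate(epoch)        # e.g. RMSE on the validation set
        if score < best_score:
            best_score, best_epoch, stalled = score, epoch, 0
        else:
            stalled += 1
            if stalled >= patience:    # no improvement for `patience` epochs
                break
    return best_epoch, best_score
```

The split is chronological rather than shuffled, which matches the usual practice for time-series crowd-flow data, though the paper's quoted text does not state the ordering explicitly.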