LC-RNN: A Deep Learning Model for Traffic Speed Prediction
Authors: Zhongjian Lv, Jiajie Xu, Kai Zheng, Hongzhi Yin, Pengpeng Zhao, Xiaofang Zhou
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two real datasets demonstrate that our proposed LC-RNN outperforms seven well-known existing methods. |
| Researcher Affiliation | Collaboration | Zhongjian Lv (1), Jiajie Xu (1,2,3), Kai Zheng (4), Hongzhi Yin (5), Pengpeng Zhao (1), Xiaofang Zhou (5,1). (1) School of Computer Science and Technology, Soochow University, China; (2) Provincial Key Laboratory for Computer Information Processing Technology, Soochow University; (3) State Key Laboratory of Software Architecture (Neusoft Corporation), China; (4) University of Electronic Science and Technology, China; (5) The University of Queensland, Australia |
| Pseudocode | No | The paper describes the model and its components but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | No | The paper describes two datasets, Beijing and Shanghai, which were collected from trajectory data. However, it does not provide concrete access information such as a specific link, DOI, repository name, or formal citation for a publicly available or open dataset. |
| Dataset Splits | Yes | The data of the first 4 months were used as the training set, and the remaining 1 month as the test set. ... Among the data, the last 15 days are the test set and the others are the training set. ... 90% of the training data is used to train the model, and the remaining 10% is used as the validation set with early stopping (patience of 3). |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'adam optimizer' but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or specific library versions). |
| Experiment Setup | Yes | We train our network with the following hyper-parameter settings: mini-batch size (48), learning rate (0.0002) with the Adam optimizer, A¹ filters (32) and A² filters (16) in each LC layer. 90% of the training data is used to train the model, and the remaining 10% is used as the validation set with early stopping (patience of 3). |
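The split protocol and hyperparameters reported above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `chronological_split`, the toy array shapes, and the `HYPERPARAMS` dictionary keys are assumptions; only the numeric values (batch size 48, learning rate 0.0002, 32 and 16 LC-layer filters, early-stopping patience 3, 90/10 split) come from the table.

```python
import numpy as np

def chronological_split(series, train_frac=0.9):
    """Split a time-ordered array into train/validation sets without
    shuffling, mirroring the paper's 90%/10% protocol (the split must be
    chronological because the task is traffic speed forecasting)."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

# Hyperparameters as reported in the paper; dictionary layout is illustrative.
HYPERPARAMS = {
    "batch_size": 48,
    "learning_rate": 2e-4,        # used with the Adam optimizer
    "lc_filters_a1": 32,          # filters for the first adjacency term per LC layer
    "lc_filters_a2": 16,          # filters for the second adjacency term per LC layer
    "early_stopping_patience": 3,
}

if __name__ == "__main__":
    # Toy example: 100 time steps of speed readings over 5 road segments.
    data = np.arange(500).reshape(100, 5)
    train, val = chronological_split(data)
    print(train.shape, val.shape)  # (90, 5) (10, 5)
```

Note that the validation set is taken from the end of the training period, so early stopping is judged on the most recent (hence most test-like) data.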