Automatically Inferring Data Quality for Spatiotemporal Forecasting
Authors: Sungyong Seo, Arash Mohegh, George Ban-Weiss, Yan Liu
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our proposed method on forecasting temperatures in Los Angeles. In this section, we evaluate DQ-LSTM on real-world climate datasets. |
| Researcher Affiliation | Academia | Department of Computer Science and Department of Civil and Environmental Engineering, University of Southern California |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | We use real-world datasets on meteorological measurements from two commercial weather services, Weather Underground (https://www.wunderground.com/) and WeatherBug (http://weather.weatherbug.com/). These are general service websites, not specific dataset repositories or formal citations for the specific processed dataset used in the paper. |
| Dataset Splits | Yes | We split each dataset into three subsets: training, validation, and testing sets. The first 60% of observations are used for training, the next 20% to tune hyperparameters (validation), and the remaining 20% to report error results (test). A chronological-split sketch follows the table. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions the algorithms and models used (e.g., the Adam optimizer, LSTM, GCN) but does not list software dependencies or library versions. |
| Experiment Setup | Yes | We set a common lag length of k = 10. For the deep recurrent models, the k previous observations are sequentially input to predict the next value. All deep recurrent models have the same 50 hidden units and one fully connected layer (ℝ^{50×1}) that provides the target output. For GCN-LSTM and DQ-LSTM, we evaluate with different numbers of layers (K) of GCN. We set the dimensions of the first (K = 1) and second (K = 2) hidden layers of GCN to 10 and 5, respectively, based on cross validation. The final layer always provides a set of scalars for every vertex, and we set β = 0.05 for the L2 regularization of the final layer. We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.001 and a mean squared error objective. A configuration sketch in PyTorch follows the table. |
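The 60/20/20 chronological split reported in the Dataset Splits row can be expressed in a few lines. The following is a minimal sketch, assuming a univariate NumPy array ordered in time; the function name and interface are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def chronological_split(series: np.ndarray, train_frac: float = 0.6, val_frac: float = 0.2):
    """Split a time-ordered array into train/validation/test without shuffling.

    Mirrors the paper's split: first 60% for training, next 20% for
    validation (hyperparameter tuning), final 20% for test reporting.
    """
    n = len(series)
    train_end = int(n * train_frac)
    val_end = int(n * (train_frac + val_frac))
    return series[:train_end], series[train_end:val_end], series[val_end:]

# Example: 1,000 hourly temperature readings -> 600 / 200 / 200 split.
observations = np.random.randn(1000)
train, val, test = chronological_split(observations)
```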
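The Experiment Setup row fixes most of the training configuration: lag k = 10, 50 LSTM hidden units, one fully connected output layer, β = 0.05 L2 regularization on that final layer, and Adam with learning rate 0.001 under an MSE objective. Below is a minimal PyTorch sketch of the plain LSTM baseline under those settings. The class name, the single-feature input, and the use of per-parameter-group weight decay to approximate the final-layer L2 penalty are assumptions; the paper's DQ-LSTM additionally includes GCN layers (hidden dimensions 10 and 5) that are not reconstructed here.

```python
import torch
import torch.nn as nn

K_LAG = 10    # common lag length k from the paper
HIDDEN = 50   # hidden units shared by all deep recurrent baselines
BETA = 0.05   # L2 penalty on the final layer, as reported

class LSTMForecaster(nn.Module):
    """Sketch of the recurrent baseline: k previous observations fed
    sequentially, 50 hidden units, and one fully connected layer
    (R^{50x1}) that produces the scalar forecast."""
    def __init__(self, n_features: int = 1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, HIDDEN, batch_first=True)
        self.fc = nn.Linear(HIDDEN, 1)

    def forward(self, x):           # x: (batch, K_LAG, n_features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])  # predict the next value

model = LSTMForecaster()
# Weight decay on the final layer only, approximating the paper's
# beta = 0.05 L2 regularization; other parameters are unregularized.
optimizer = torch.optim.Adam(
    [
        {"params": model.lstm.parameters(), "weight_decay": 0.0},
        {"params": model.fc.parameters(), "weight_decay": BETA},
    ],
    lr=1e-3,                        # Adam with learning rate 0.001
)
loss_fn = nn.MSELoss()              # mean squared error objective

# Usage example: a batch of 32 windows of k = 10 univariate readings.
x = torch.randn(32, K_LAG, 1)
pred = model(x)                     # shape: (32, 1)
```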