NeuCast: Seasonal Neural Forecast of Power Grid Time Series
Authors: Pudi Chen, Shenghua Liu, Chuan Shi, Bryan Hooi, Bai Wang, Xueqi Cheng
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on 134 real-world datasets show the improvements of NeuCast over the state-of-the-art methods. |
| Researcher Affiliation | Academia | Pudi Chen1,2, Shenghua Liu3,4, Chuan Shi1,2, Bryan Hooi5, Bai Wang1,2 and Xueqi Cheng3,4 — 1 Beijing Key Lab of Intelligent Telecommunications Software and Multimedia; 2 Beijing University of Posts and Telecommunications; 3 CAS Key Laboratory of Network Data Science and Technology; 4 Institute of Computing Technology, Chinese Academy of Sciences; 5 School of Computer Science, Carnegie Mellon University. {chenpudigege,shichuan,wangbai}@bupt.edu.cn, {liushenghua,cxq}@ict.ac.cn, bhooi@andrew.cmu.edu |
| Pseudocode | Yes | Algorithm 1 NeuCast Algorithm |
| Open Source Code | Yes | Reproducibility: Our code is publicly available at https://github.com/chenpudigege/NeuCast |
| Open Datasets | No | The paper mentions using a 'publicly available data from Carnegie Mellon University (CMU)' but does not provide a specific link, DOI, or full bibliographic citation to access this dataset. |
| Dataset Splits | Yes | We use 323-day-long time series in each location for training, and forecast the next 5 days. We use the data of the first 18 days for training. |
| Hardware Specification | Yes | We conduct our experiments on a server with an NVIDIA GeForce GTX 1080, and implement NeuCast based on Keras. |
| Software Dependencies | No | The paper states 'implement NeuCast based on Keras', but does not specify a version number for Keras or any other software dependency. |
| Experiment Setup | Yes | In the neural network, the latent vector size is 4, and each MLP unit has one hidden layer with 16 neurons. In training, the batch size is 32 and the learning rate is 0.0005. A hyperparameter α in high-level distinct pattern recognition [Matsubara et al., 2014] decides how many distinct patterns should be identified. Via validation on the time series data, the authors found that α = 0.2 gives stable and better forecast accuracy. The maximum number of epochs is set to K = 2. |
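The reported architecture (latent vectors of size 4, one hidden layer with 16 neurons, batch size 32) can be sketched with a minimal NumPy forward pass. This is an illustrative reconstruction, not the authors' code: the ReLU activation, scalar output, and weight initialization are assumptions; only the layer sizes and batch size come from the paper.

```python
import numpy as np

LATENT_DIM = 4    # latent vector size (from the paper)
HIDDEN_DIM = 16   # neurons in the single hidden layer (from the paper)
BATCH_SIZE = 32   # training batch size (from the paper)

rng = np.random.default_rng(0)

# Assumed small random initialization; the paper does not specify one.
W1 = rng.normal(scale=0.1, size=(LATENT_DIM, HIDDEN_DIM))
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.normal(scale=0.1, size=(HIDDEN_DIM, 1))
b2 = np.zeros(1)

def mlp_forward(x):
    """One-hidden-layer MLP forward pass (activation assumed to be ReLU)."""
    h = np.maximum(0.0, x @ W1 + b1)   # hidden layer: (batch, 16)
    return h @ W2 + b2                 # scalar forecast per sample: (batch, 1)

batch = rng.normal(size=(BATCH_SIZE, LATENT_DIM))
out = mlp_forward(batch)
print(out.shape)  # (32, 1)
```

The paper's actual implementation is Keras-based and trained with a learning rate of 0.0005; the sketch above only shows the shape of one MLP unit.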