Blue Skies: A Methodology for Data-Driven Clear Sky Modelling
Authors: Kartik Palani, Ramachandra Kota, Amar Prakash Azad, Vijay Arya
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In order to empirically validate our methodology, we conducted experiments and evaluated the performance of both the components of our approach: (i) generating clear sky dataset and (ii) learning clear sky model. For evaluation, our irradiance dataset consisted of GHI pyranometer measurements at 1-minute resolution from three different locations. |
| Researcher Affiliation | Collaboration | Kartik Palani (University of Illinois at Urbana-Champaign, USA), Ramachandra Kota+, Amar Prakash Azad+, Vijay Arya+ (+IBM Research, India); palani2@illinois.edu, {rama.chandra, amarazad, vijay.arya}@in.ibm.com |
| Pseudocode | No | The paper describes the methodology in prose sections (e.g., "Stage 1: Base Model", "Stage 2: Generating Clear Sky Dataset"), but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links, explicit statements, or references to supplementary materials for the open-source code of the methodology described. |
| Open Datasets | Yes | For evaluation, our irradiance dataset consisted of GHI pyranometer measurements at 1-minute resolution from three different locations. ... 1. Tucson, Arizona, US [Andreas and Wilcox, 2010]... Finally, to evaluate our approach, we utilize NREL's National Solar Radiation Database (NSRDB) [Wilcox, 2007]. |
| Dataset Splits | Yes | For our experiments, we split the dataset into train-test parts in the following manner. For Tucson and Seria, where the dataset spans several years, some years were used for training and others for test. For Tucson, 2011, 2012, 2013 & 2015 were in the training set and 2010, 2014 & 2016 in the test set. For Seria, 2012 & 2014 were used for training and 2013 for test. Since the Bangalore dataset only spanned 12 months, 80% of the days in each month (24 days) were used for training and rest 20% for test. An illustrative reconstruction of these splits is sketched in the first code block after the table. |
| Hardware Specification | No | The paper describes the experimental setup and datasets but does not explicitly mention the specific hardware (e.g., CPU, GPU models) used to run the experiments. |
| Software Dependencies | No | The paper mentions the use of algorithms and tools like "NREL's Solar Position Algorithm", "DBSCAN [Ester et al., 1996]", and "Levenberg-Marquardt algorithm", but it does not provide specific version numbers for any of the software dependencies used in the experiments. Hedged sketches of how DBSCAN and Levenberg-Marquardt might enter the pipeline follow the table. |
| Experiment Setup | No | The paper describes the learning methods and data splitting strategy (e.g., "L2-minimization is obtained using Levenberg-Marquardt algorithm", "split the dataset into train-test parts", "divided the data into 5 parts, each part corresponding to a season"), but it does not provide specific numerical hyperparameters such as learning rates, batch sizes, number of epochs, or detailed optimizer settings for the training process. A hedged Levenberg-Marquardt fitting sketch appears as the last code block after the table. |
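
The paper reports its train-test splits only in prose, so the following Python sketch shows one way the reported splits could be reconstructed. It assumes a pandas DataFrame indexed by timestamp holding the 1-minute GHI measurements; the function and variable names are ours, and only the split years and the per-month 80/20 day ratio come from the paper.

```python
# Illustrative reconstruction of the reported splits; not the authors' code.
import pandas as pd

def split_tucson(df: pd.DataFrame):
    """Year-based split reported for Tucson."""
    train_years = {2011, 2012, 2013, 2015}
    test_years = {2010, 2014, 2016}
    train = df[df.index.year.isin(train_years)]
    test = df[df.index.year.isin(test_years)]
    return train, test

def split_bangalore(df: pd.DataFrame, train_frac: float = 0.8):
    """Per-month day split reported for Bangalore: ~80% of days (about 24) train, rest test."""
    train_parts, test_parts = [], []
    for _, month_df in df.groupby([df.index.year, df.index.month]):
        days = sorted(month_df.index.normalize().unique())
        # The paper does not say which days go to train; the first 80% are
        # used here purely for illustration.
        n_train = int(round(train_frac * len(days)))
        train_days = set(days[:n_train])
        mask = month_df.index.normalize().isin(train_days)
        train_parts.append(month_df[mask])
        test_parts.append(month_df[~mask])
    return pd.concat(train_parts), pd.concat(test_parts)
```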
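
For the clear sky dataset generation stage, the paper cites DBSCAN [Ester et al., 1996] but does not report the features or hyperparameters used. The sketch below is one plausible reading, assuming clear sky points are selected by clustering normalized deviations of measured GHI from a base clear sky curve; the feature construction, `eps`, and `min_samples` are assumptions, not the authors' settings.

```python
# Hedged sketch: clear sky point selection via DBSCAN; the feature and
# hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def label_clear_points(ghi, ghi_base, eps=0.05, min_samples=30):
    """Return a boolean mask of points treated as clear sky.

    ghi      -- measured GHI (1-minute resolution, numpy array)
    ghi_base -- base clear sky model evaluated at the same timestamps
    """
    # Normalized deviation from the base model; near zero when the sky is clear.
    deviation = (ghi - ghi_base) / np.maximum(ghi_base, 1.0)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        deviation.reshape(-1, 1)
    )
    clusters = [label for label in set(labels) if label != -1]
    if not clusters:
        return np.zeros_like(ghi, dtype=bool)
    # Treat the cluster whose mean deviation is closest to zero as clear sky.
    best = min(clusters, key=lambda label: abs(deviation[labels == label].mean()))
    return labels == best
```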
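
For the model learning stage, the paper states only that the L2-minimization is solved with the Levenberg-Marquardt algorithm; it does not give the base model's functional form or optimizer settings. The sketch below uses SciPy's `least_squares` with `method="lm"` and a Haurwitz-style parametric form purely for illustration; the cosine of the solar zenith angle would come from a solar position algorithm such as NREL's SPA, and the starting parameters are guesses.

```python
# Minimal sketch of an L2 clear sky fit with Levenberg-Marquardt (SciPy).
# The functional form and starting values are assumptions for illustration.
import numpy as np
from scipy.optimize import least_squares

def clear_sky_ghi(params, cos_zenith):
    """Haurwitz-style parametric clear sky GHI (illustrative, not the paper's model)."""
    a, b, c = params
    cz = np.clip(cos_zenith, 1e-3, 1.0)  # guard against sunrise/sunset values
    return a * np.power(cz, b) * np.exp(-c / cz)

def residuals(params, cos_zenith, ghi_clear):
    return clear_sky_ghi(params, cos_zenith) - ghi_clear

def fit_clear_sky_model(cos_zenith, ghi_clear, x0=(1100.0, 1.0, 0.06)):
    """Fit the parametric model to the clear sky points from the previous stage."""
    result = least_squares(residuals, x0, args=(cos_zenith, ghi_clear), method="lm")
    return result.x
```

A seasonal variant, matching the paper's division of the data into five seasonal parts, would simply call `fit_clear_sky_model` once per season's subset of clear sky points.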