Subtask Gated Networks for Non-Intrusive Load Monitoring

Authors: Changho Shin, Sunghwan Joo, Jaeryun Yim, Hyoseop Lee, Taesup Moon, Wonjong Rhee

AAAI 2019, pp. 1150-1157 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our proposed methods on the two real-world datasets, REDD (Kolter and Johnson 2011) and UK-DALE (Kelly and Knottenbelt 2014).
Researcher Affiliation | Collaboration | (1) Encored Technologies, Seoul, Korea; (2) Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea; (3) Department of Transdisciplinary Studies, Seoul National University, Seoul, Korea
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for the described methodology.
Open Datasets | Yes | We evaluate our proposed methods on the two real-world datasets, REDD (Kolter and Johnson 2011) and UK-DALE (Kelly and Knottenbelt 2014). We only used the last week of data, which was published after preprocessing. (Footnote 1: http://jack-kelly.com/files/neuralnilm/NeuralNILM_data.zip)
Dataset Splits | No | The paper describes training and test set splits (e.g., 'We used the data of houses 2-6 as the training set, and house 1 as the test set'), but does not explicitly mention a separate validation set split or how one was derived, which limits reproducibility.
Hardware Specification | Yes | The DNN models are trained on an NVIDIA GTX 1080Ti and implemented using the TensorFlow 1.8 package.
Software Dependencies | Yes | The DNN models are trained on an NVIDIA GTX 1080Ti and implemented using the TensorFlow 1.8 package.
Experiment Setup | Yes | Our model has the following hyperparameters. The learning rate is 1.0 × 10^-4, and the batch size is 16. Data was sliced with window size w=400 and output sequence length s=64 for REDD, and w=200 and s=32 for UK-DALE. We used the Adam optimizer (Kingma and Ba 2015) for training.
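The split and slicing details reported above are concrete enough to sketch the data preparation. Below is a minimal sketch, not the authors' released code: the house-level split (houses 2-6 train, house 1 test) and the window/output lengths come from the table rows, while the stride choice, the centered target, and the function name `slice_windows` are assumptions.

```python
import numpy as np

# House-level split reported for REDD: houses 2-6 for training, house 1 for testing.
TRAIN_HOUSES = [2, 3, 4, 5, 6]
TEST_HOUSES = [1]

def slice_windows(mains, appliance, w=400, s=64):
    """Slice aligned mains/appliance power series into (input window, target) pairs.

    w and s follow the paper's REDD setting (w=200, s=32 for UK-DALE).
    The stride of s and the centered s-length target are assumptions;
    the paper does not state them.
    """
    xs, ys = [], []
    for i in range(0, len(mains) - w + 1, s):
        xs.append(mains[i:i + w])
        mid = i + (w - s) // 2          # center the target inside the window
        ys.append(appliance[mid:mid + s])
    return np.asarray(xs, np.float32), np.asarray(ys, np.float32)
```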
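Likewise, the optimizer settings can be pinned down in TensorFlow 1.8, the version the paper names. The network below is a hypothetical dense stand-in, not the paper's subtask gated architecture (which gates a regression subnetwork with an on/off classification subnetwork); only the learning rate, batch size, and window/output sizes are taken from the table.

```python
import tensorflow as tf  # TensorFlow 1.8, per the paper

W, S = 400, 64        # REDD window / output lengths from the table
LR, BATCH = 1e-4, 16  # learning rate and batch size from the table

x = tf.placeholder(tf.float32, [None, W], name="mains_window")
y = tf.placeholder(tf.float32, [None, S], name="appliance_target")

# Hypothetical stand-in network; the paper's model differs.
hidden = tf.layers.dense(x, 128, activation=tf.nn.relu)
pred = tf.layers.dense(hidden, S)

loss = tf.reduce_mean(tf.square(pred - y))
train_op = tf.train.AdamOptimizer(learning_rate=LR).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Feed mini-batches of size BATCH built with slice_windows(...) above.
```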