Sequence-to-Point Learning With Neural Networks for Non-Intrusive Load Monitoring
Authors: Chaoyun Zhang, Mingjun Zhong, Zongzuo Wang, Nigel Goddard, Charles Sutton
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply the proposed neural network approaches to real-world household energy data, and show that the methods achieve state-of-the-art performance, improving two standard error measures by 84% and 92%. |
| Researcher Affiliation | Academia | (1) School of Informatics, University of Edinburgh, United Kingdom, {chaoyun.zhang,nigel.goddard,c.sutton}@ed.ac.uk; (2) School of Computer Science, University of Lincoln, United Kingdom, mzhong@lincoln.ac.uk |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states the models are implemented in Python using TensorFlow, but does not provide a link or any other access information for the source code. |
| Open Datasets | Yes | We report results on the UK-DALE (Kelly and Knottenbelt 2015b) and REDD (Kolter and Johnson 2011) data sets, which measured the domestic appliance-level energy consumption and whole-house energy usage of five UK houses and six US houses respectively. |
| Dataset Splits | No | The paper describes training and test splits (e.g., for UK-DALE, 'houses 1, 3, 4, and 5 for training the neural networks, and house 2 as the test data'), but does not describe a separate validation split. |
| Hardware Specification | Yes | The networks were trained on machines with NVIDIA GTX 970 and NVIDIA GTX TITAN X GPUs. |
| Software Dependencies | No | The paper states the models are implemented in Python using TensorFlow, but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | A window of the mains was used as the input sequence; the window length for each appliance is shown in Table 1. The training windows were obtained by sliding the mains (input) and appliance (output) readings one timestep at a time... Both the input windows and targets were preprocessed by subtracting the mean values and dividing by the standard deviations (see these parameters in Table 1). (See the sketches below.) |
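The Experiment Setup row above quotes the paper's sliding-window preprocessing: windows are slid over the mains one timestep at a time, the target is a single appliance reading, and both are standardized. The sketch below is a minimal reconstruction of that pipeline, assuming the target is the appliance reading at the window midpoint (as in sequence-to-point learning); the window length and the mean/standard-deviation values here are placeholders standing in for the per-appliance parameters the paper reports in its Table 1.

```python
import numpy as np

def make_seq2point_pairs(mains, appliance, window_len=599,
                         mains_mean=1800.0, mains_std=600.0,
                         app_mean=200.0, app_std=400.0):
    """Build (input window, midpoint target) training pairs by sliding
    a window over the mains signal one timestep at a time. All window
    and normalization parameters are illustrative placeholders; the
    paper lists the actual per-appliance values in its Table 1.
    """
    half = window_len // 2
    inputs, targets = [], []
    for start in range(len(mains) - window_len + 1):
        window = mains[start:start + window_len]
        midpoint = start + half
        # Standardize as described in the paper: subtract the mean,
        # divide by the standard deviation.
        inputs.append((window - mains_mean) / mains_std)
        targets.append((appliance[midpoint] - app_mean) / app_std)
    return np.asarray(inputs), np.asarray(targets)

# Toy usage with synthetic readings.
rng = np.random.default_rng(0)
mains = rng.uniform(0, 3000, size=10_000)
appliance = rng.uniform(0, 2000, size=10_000)
X, y = make_seq2point_pairs(mains, appliance)
print(X.shape, y.shape)  # (9402, 599) (9402,)
```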
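Since the paper implements its networks in Python with TensorFlow but releases no code (see the Open Source Code and Software Dependencies rows), the following is a hedged sketch of a sequence-to-point style CNN in tf.keras. The convolutional stack (filter counts and kernel sizes) follows the architecture described in the paper, but the window length, padding choice, and optimizer are assumptions; treat this as an illustrative reconstruction, not the authors' implementation.

```python
import tensorflow as tf

def build_seq2point(window_len=599):
    """Sketch of a sequence-to-point CNN: stacked 1-D convolutions
    over the mains window, then a dense layer and a single linear
    output for the appliance reading at the window midpoint.
    Layer sizes follow the paper's description; other settings
    (window length, padding, optimizer) are placeholders.
    """
    return tf.keras.Sequential([
        tf.keras.Input(shape=(window_len, 1)),
        tf.keras.layers.Conv1D(30, 10, activation="relu", padding="same"),
        tf.keras.layers.Conv1D(30, 8, activation="relu", padding="same"),
        tf.keras.layers.Conv1D(40, 6, activation="relu", padding="same"),
        tf.keras.layers.Conv1D(50, 5, activation="relu", padding="same"),
        tf.keras.layers.Conv1D(50, 5, activation="relu", padding="same"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1024, activation="relu"),
        tf.keras.layers.Dense(1),  # linear midpoint prediction
    ])

model = build_seq2point()
model.compile(optimizer="adam", loss="mse")
model.summary()
```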