Self-Supervised Learning of Appliance Usage
Authors: Chen-Yu Hsu, Abbas Zeitoun, Guang-He Lee, Dina Katabi, Tommi Jaakkola
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate our design, we have created the first dataset with concurrent streams of home energy and location data, collected from 4 homes over a period of 7 months. For each home, data was collected for 2 to 4 months. Ground truth measurements are provided via smart plugs connected directly to each appliance. Compared to past work on unsupervised learning of appliance usage and a new baseline that leverages the two modalities, our method achieves significant improvements of 67.3% and 51.9% respectively for the average detection F1 score. |
| Researcher Affiliation | Academia | Chen-Yu Hsu, Abbas Zeitoun, Guang-He Lee, Dina Katabi & Tommi Jaakkola Computer Science and Artificial Intelligence Lab Massachusetts Institute of Technology Cambridge, MA 02139, USA {cyhsu,zeitoun,guanghe,dk}@mit.edu, tommi@csail.mit.edu |
| Pseudocode | Yes | Algorithm 1 Clustering energy events with the learned cross-modal relations |
| Open Source Code | Yes | We will release our code and dataset to encourage future work on multi-modal models for understanding appliance usage patterns and the underlying user behavior. Project website: http://sapple.csail.mit.edu |
| Open Datasets | Yes | To evaluate our design, we have created the first dataset with concurrent streams of home energy and location data, collected from 4 homes over a period of 7 months. We will release our code and dataset to encourage future work on multi-modal models for understanding appliance usage patterns and the underlying user behavior. Project website: http://sapple.csail.mit.edu |
| Dataset Splits | No | The paper mentions 'training' but does not specify explicit dataset splits (e.g., percentages or counts for training, validation, and test sets), nor does it mention cross-validation. It only mentions 'The minimum predictability score ηs is chosen based on a validation set from one of the homes.' without detailing the split. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as CPU models, GPU models, or memory specifications. It only mentions sensor hardware for data collection (emon Pi, wireless location sensor, TP-Link smart plugs). |
| Software Dependencies | No | The paper states 'The neural networks are implemented in Tensorflow (Abadi et al., 2016). For training, we use the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 0.001 and a batch size of 64.' While TensorFlow and Adam are mentioned, specific version numbers for these software dependencies are not provided in the text. |
| Experiment Setup | Yes | For training, we use the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 0.001 and a batch size of 64. We choose the dimensions of z_{t,cat} and z_{t,cont} to be 128 and 3. The frames of location images for each time window have 32 × 32 pixels. We choose λ to be 0.1 in our experiments to put more emphasis on the location prediction. In all experiments, we set η_{Dloc} = 0.4 meters, η_z = 0.03, η_s = 0.2, and N_min = 10. |
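The hyperparameters quoted in the Experiment Setup row can be collected into a single sketch. This is an illustration only: the config keys, the helper name `total_loss`, and the exact way λ weights the two objectives are assumptions for clarity, not taken from the authors' released code.

```python
# Hyperparameters as quoted from the paper (Hsu et al., ICLR 2020).
CONFIG = {
    "optimizer": "Adam",           # Kingma & Ba, 2014
    "learning_rate": 0.001,
    "batch_size": 64,
    "dim_z_cat": 128,              # dimension of z_{t,cat}
    "dim_z_cont": 3,               # dimension of z_{t,cont}
    "location_frame_pixels": (32, 32),
    "lambda_": 0.1,                # loss weight emphasizing location prediction
    "eta_Dloc_meters": 0.4,
    "eta_z": 0.03,
    "eta_s": 0.2,                  # min. predictability score, tuned on a validation home
    "N_min": 10,
}

def total_loss(energy_loss: float, location_loss: float,
               lam: float = CONFIG["lambda_"]) -> float:
    """Hypothetical combined objective: scaling the energy term by
    lambda = 0.1 puts relatively more emphasis on location prediction,
    matching the paper's description of lambda's role."""
    return lam * energy_loss + location_loss
```

With λ = 0.1, an energy loss of 1.0 contributes only 0.1 to the total, so gradients are dominated by the location-prediction term, which is the stated intent of the weighting.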