Multiple-Profile Prediction-of-Use Games

Authors: Andrew Perrault, Craig Boutilier

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate our approach with experimental results using utility models learned from real electricity use data."
Researcher Affiliation | Collaboration | Andrew Perrault and Craig Boutilier, Department of Computer Science, University of Toronto, {perrault, cebly}@cs.toronto.edu. Author now at Google Research, Mountain View, CA.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper notes that the TensorFlow software it uses is available from tensorflow.org, but it does not state that the authors' own code is open source or provide a link to it.
Open Datasets | Yes | "We experimentally validate our techniques, using household utility functions that we learn (via structured prediction) from publicly-available electricity use data. We find that the MPOU model provides a gain of 3-5% over a fixed-rate tariff across several test scenarios, while a POU tariff without consumer coordination can result in losses of up to 30%. These experiments represent the first study of the welfare consequences of POU tariffs."
Dataset Splits | No | The paper states, "We split the data into 80% train and 20% test for each household," but it does not mention a validation split.
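The quoted per-household 80/20 split can be sketched as below. This is an illustrative reconstruction, not the authors' code: the `households` data is fabricated, and the paper does not say whether the split was chronological or random (chronological is assumed here).

```python
def split_household(series, train_frac=0.8):
    """Split one household's usage records into train/test portions.

    Assumes records are in chronological order; the paper does not
    specify how the 80/20 split was drawn.
    """
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

# Hypothetical stand-in data: usage readings for two households.
households = {"h1": list(range(100)), "h2": list(range(50))}

# Split each household independently, as the quoted sentence describes.
splits = {h: split_household(s) for h, s in households.items()}
for h, (train, test) in splits.items():
    print(h, len(train), len(test))
```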
Hardware Specification | Yes | "Each instance took around 3 minutes on a single thread of 2.6 GHz Intel i7, 8 GB RAM."
Software Dependencies | No | The paper mentions TensorFlow, the Adam optimizer, and Dropout, but it does not give version numbers for these components, which a reproducible description of ancillary software requires.
Experiment Setup | Yes | "We represent z_i^(0), z_i^(1) and z_i^(2) in fully-connected single-layer neural networks, each with 10 hidden units and ReLU activations, and train the model with backpropagation. We implement the model in TensorFlow [Abadi et al., 2015] using the squared error loss function and the Adam optimizer [Kingma and Ba, 2015]. We use Dropout [Srivastava et al., 2014] with a probability of 0.7 on each hidden unit."
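The quoted setup can be sketched roughly as follows. This is a minimal NumPy re-implementation of one such network (a single fully-connected hidden layer of 10 ReLU units, squared-error loss, trained with backpropagation); the authors used TensorFlow with the Adam optimizer and Dropout at 0.7, both omitted here for brevity, and the data, learning rate, and step count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for the learned utility targets.
X = rng.normal(size=(256, 4))
y = (X ** 2).sum(axis=1, keepdims=True)

# Single fully-connected hidden layer: 10 units, ReLU activations.
W1 = rng.normal(scale=0.5, size=(4, 10))
b1 = np.zeros((1, 10))
W2 = rng.normal(scale=0.5, size=(10, 1))
b2 = np.zeros((1, 1))

lr = 0.01
losses = []
for step in range(500):
    # Forward pass.
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)               # ReLU activation
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))  # squared-error loss

    # Backpropagation of the squared-error gradient.
    g_pred = 2.0 * err / len(X)
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(axis=0, keepdims=True)
    g_h_pre = (g_pred @ W2.T) * (h_pre > 0)
    gW1 = X.T @ g_h_pre
    gb1 = g_h_pre.sum(axis=0, keepdims=True)

    # Plain gradient descent; the paper used Adam instead.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(losses[0], losses[-1])
```

Training three such networks, one per component z_i^(0), z_i^(1), z_i^(2), would follow the same pattern with the appropriate targets for each.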