Latent Bayesian melding for integrating individual and population models

Authors: Mingjun Zhong, Nigel Goddard, Charles Sutton

NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results
Researcher Affiliation | Academia | Mingjun Zhong, Nigel Goddard, Charles Sutton; School of Informatics, University of Edinburgh, United Kingdom; {mzhong,nigel.goddard,csutton}@inf.ed.ac.uk
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a repository link or an explicit statement about releasing its source code.
Open Datasets | Yes | We apply AFHMM, AFHMM+PR and AFHMM+LBM to the Household Electricity Survey (HES) data (footnote 1: The HES dataset and information on how the raw data was cleaned can be found from https://www.gov.uk/government/publications/household-electricity-survey). This data set was gathered in a recent study commissioned by the UK Department of Food and Rural Affairs. The study monitored 251 households, selected to be representative of the population, across England from May 2010 to July 2011 [27]. Individual appliances were monitored, and in some households the overall electricity consumption was also monitored. The data were monitored ... We then apply the models to the UK-DALE dataset [13], which was also gathered from UK households
Dataset Splits | No | The paper mentions using "training data" for model parameters and specific durations for testing (e.g., "one day's usage was used as test data", "10 days data for each house", "a month for analysis"), but it does not provide explicit training/validation/test splits with percentages, sample counts, or references to predefined splits needed to reproduce the experiments.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory used for running its experiments.
Software Dependencies | Yes | The paper cites "MOSEK ApS. The MOSEK optimization toolbox for Python manual. Version 7.1 (Revision 28), 2015.", used for solving the convex quadratic program. (A hedged usage sketch follows the table.)
Experiment Setup | Yes | We assume they are known and can be learned from the training data. Note that we assumed the HMMs have 3 states for all the appliances. ... We model $\xi_i$ with a discrete distribution such that $P(\xi_i) = \prod_{c=1}^{C_i} p_{ic}^{\xi_{ic}}$, where $p_{ic}$ represents the prior probability of the number of cycles for the appliance $i$, which can be obtained from the training data. ... The constraints for those variables are represented as sets $Q_S = \{\sum_{k=1}^{K_i} S_{itk} = 1,\ S_{itk} \in [0,1],\ \forall i,t\}$, $Q_\xi = \{\sum_{c=1}^{C_i} \xi_{ic} = 1,\ \xi_{ic} \in [0,1],\ \forall i\}$, $Q_{H,S} = \{\sum_{l=1}^{K_i} H_{it,l\cdot} = S^{\mathsf{T}}_{i,t-1},\ \sum_{l=1}^{K_i} H_{it,\cdot l} = S_{it},\ h_{it,jk} \in [0,1],\ \forall i,t\}$, and $Q_{U,\Sigma} = \{U \geq 0,\ \Sigma \geq 0,\ \sigma^2_{im} < \hat{\sigma}^2_{im},\ \forall i,m\}$. ... We optimize $\Sigma$ while fixing all the other variables, and then optimize all the other variables simultaneously while fixing $\Sigma$. This optimization problem is then a convex quadratic program (CQP), for which we use MOSEK [2]. (A hedged sketch of this CQP step follows the table.)
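
The Software Dependencies row names MOSEK's Python toolbox as the solver for the convex quadratic program. The paper shows no code, so the following is only a minimal sketch of solving a small CQP with simplex constraints; the problem data (A, b) and the use of cvxpy as the modelling layer are assumptions, and the MOSEK dispatch requires the mosek package and a licence.

```python
# Minimal sketch: a convex quadratic program with simplex constraints,
# dispatched to MOSEK through cvxpy. All problem data here are synthetic.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x = cp.Variable(5)
objective = cp.Minimize(cp.sum_squares(A @ x - b))  # convex quadratic objective
constraints = [cp.sum(x) == 1, x >= 0]              # x lies on the probability simplex

prob = cp.Problem(objective, constraints)
# Requires the `mosek` package and a MOSEK licence; drop the `solver=`
# argument to fall back to cvxpy's default QP solver.
prob.solve(solver=cp.MOSEK)
print(x.value)
```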
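
The Experiment Setup row describes an alternating scheme: with the noise variances $\Sigma$ held fixed, the relaxed state indicators (the $Q_S$ simplex constraints) are optimized as a convex quadratic program. The sketch below illustrates only that data-fit step on synthetic data; the appliance means mu, aggregate signal y, and noise variance sigma2 are placeholders, and the transition variables $H$, cycle variables $\xi$, and the population (latent Bayesian melding) prior terms from the paper are omitted.

```python
# Hedged sketch of one block-coordinate step: noise variance fixed, relaxed
# state indicators S_{itk} optimized under the Q_S simplex constraints.
import numpy as np
import cvxpy as cp

I_app, K, T = 3, 3, 50   # appliances, HMM states per appliance (3, per the paper), time steps
rng = np.random.default_rng(1)
mu = rng.uniform(0.0, 2.0, size=(I_app, K))            # per-state mean power of each appliance
S_true = np.eye(K)[rng.integers(K, size=(I_app, T))]   # one-hot ground-truth states, shape (I, T, K)
y = np.einsum('ik,itk->t', mu, S_true) + 0.05 * rng.standard_normal(T)  # aggregate signal
sigma2 = 0.05 ** 2                                     # noise variance, held fixed in this step

# Relaxed indicators S_{itk}: rows indexed by (i, t) with appliance-major ordering.
S = cp.Variable((I_app * T, K), nonneg=True)
constraints = [cp.sum(S, axis=1) == 1]                 # Q_S: each S_{it.} lies on the simplex

# Reconstructed aggregate at each time step: sum_i mu_i^T S_{it}.
recon = cp.hstack([cp.sum(cp.multiply(mu, S[t::T, :])) for t in range(T)])
objective = cp.Minimize(cp.sum_squares(y - recon) / (2 * sigma2))

prob = cp.Problem(objective, constraints)
prob.solve()   # solver=cp.MOSEK would match the toolbox cited in the paper
print(np.round(S.value.reshape(I_app, T, K)[0, :5], 2))  # relaxed marginals, appliance 0, first 5 steps
```

In the paper's full procedure this CQP step alternates with an update of $\Sigma$ while all other variables are held fixed; the sketch shows only the structure of the relaxed quadratic subproblem, not the complete AFHMM+LBM objective.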