Transferring Decomposed Tensors for Scalable Energy Breakdown Across Regions

Authors: Nipun Batra, Yiling Jia, Hongning Wang, Kamin Whitehouse

AAAI 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We evaluate our approach on two U.S. cities with distinct weather, drawn from a publicly available dataset. We find that our approach gives better energy breakdown estimates while requiring the fewest instrumented homes from the target region, compared to the state of the art. Our main result, in Figure 2, shows that our approach, <TTF, Transfer>, compares favourably with all the other baselines on the Austin to San Diego transfer. |
| Researcher Affiliation | Academia | Nipun Batra, Yiling Jia, Hongning Wang, and Kamin Whitehouse, University of Virginia |
| Pseudocode | No | The paper describes mathematical formulations and optimization procedures but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our entire codebase, baselines, analysis and experiments can be found on GitHub (link anonymised for submission). |
| Open Datasets | Yes | We evaluate our approach on a publicly available dataset called Dataport (Parson et al. 2015). |
| Dataset Splits | Yes | We use nested cross-validation across all our baselines and our approach. For the outer loop (looping across homes), we use 10-fold cross-validation. [...] For the inner loop, we use 2-fold cross-validation. The inner loop is used for parameter/hyperparameter fine-tuning. (A hedged sketch of this nested CV scheme appears after the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU/GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using 'Adagrad (Duchi, Hazan, and Singer 2011)', 'Autograd (Maclaurin, Duvenaud, and Adams 2015)', and 'CVXPy (Diamond and Boyd 2016)', but does not provide version numbers for these software dependencies. (See the toolchain sketch after the table.) |
| Experiment Setup | Yes | The parameters in TTF (both normal and transfer) are the number of home and season factors; the hyperparameters are the learning rate and the number of iterations. For STF, there is only one parameter, the rank (r); its candidate set of hyperparameters is the same as TTF's. For the MF-based baselines, we used the CVXPy-based (Diamond and Boyd 2016) implementation used by that paper's authors, which solves the MF problem via alternating least squares. The parameter for <MF, Transfer> and <MF, Normal> is the number of latent factors; the hyperparameter is the number of iterations of alternating least squares. Finally, the Frac(n) values required in Eq. (7) are taken from Table 1. Figure 6 shows the distribution of the optimal parameters and hyperparameters for transfer and normal learning with TTF. (Hedged sketches of the Adagrad/Autograd optimization and the CVXPy ALS baseline follow the table.) |
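
The Dataset Splits row describes a 10-fold outer loop over homes with a 2-fold inner loop for parameter/hyperparameter tuning. Below is a minimal sketch of that nested-CV scheme, assuming a scikit-learn-style workflow; the toy data, the Ridge stand-in model, and the alpha grid are illustrative placeholders, not the paper's model or search space:

```python
# Hypothetical nested cross-validation sketch: 10-fold outer loop over
# homes, 2-fold inner loop for hyperparameter selection (per the paper's
# Dataset Splits description). Model and grid are placeholders.
import numpy as np
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.linear_model import Ridge  # stand-in for the actual TTF model

homes_X = np.random.rand(100, 12)   # e.g., 100 homes x 12 monthly readings
homes_y = np.random.rand(100)       # e.g., one appliance's energy per home

outer = KFold(n_splits=10, shuffle=True, random_state=0)
inner = KFold(n_splits=2, shuffle=True, random_state=0)

fold_errors = []
for train_idx, test_idx in outer.split(homes_X):
    # Inner 2-fold CV tunes hyperparameters on the training homes only.
    search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]},
                          cv=inner, scoring="neg_mean_absolute_error")
    search.fit(homes_X[train_idx], homes_y[train_idx])
    pred = search.predict(homes_X[test_idx])
    fold_errors.append(np.mean(np.abs(pred - homes_y[test_idx])))

print(f"Mean outer-fold error: {np.mean(fold_errors):.3f}")
```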
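The paper's cited toolchain pairs Autograd for gradient computation with Adagrad updates, tuned over a learning rate and an iteration count. The sketch below assumes a generic CP-style factorization of a home x appliance x month tensor, not necessarily the authors' exact TTF model; the shapes, rank, learning rate, and iteration count are illustrative:

```python
# Minimal sketch (not the authors' code): fit a CP-style tensor
# factorization with Autograd-computed gradients and a hand-rolled
# Adagrad update. All settings below are illustrative assumptions.
import autograd.numpy as np
from autograd import grad

n_homes, n_appliances, n_months, rank = 50, 6, 12, 3
T = np.abs(np.random.randn(n_homes, n_appliances, n_months))  # toy tensor

def loss(params):
    H, A, S = params  # home, appliance, and season factor matrices
    # Reconstruct the tensor as sum_r H[i,r] * A[j,r] * S[k,r].
    T_hat = np.einsum("ir,jr,kr->ijk", H, A, S)
    return np.mean((T - T_hat) ** 2)

loss_grad = grad(loss)  # gradient w.r.t. the whole parameter list

params = [0.1 * np.abs(np.random.randn(n_homes, rank)),
          0.1 * np.abs(np.random.randn(n_appliances, rank)),
          0.1 * np.abs(np.random.randn(n_months, rank))]
accum = [np.zeros_like(p) for p in params]
lr, n_iters, eps = 0.1, 500, 1e-8

for _ in range(n_iters):
    grads = loss_grad(params)
    for p, g, a in zip(params, grads, accum):
        a += g ** 2                       # Adagrad: accumulate squared grads
        p -= lr * g / (np.sqrt(a) + eps)  # per-parameter adaptive step

print(f"final MSE: {loss(params):.4f}")
```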
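Finally, the setup states that the MF baselines solve matrix factorization via alternating least squares in CVXPy: with one factor held fixed, the subproblem for the other is convex. A toy sketch under that reading, with an assumed nonnegativity constraint and an illustrative rank and iteration count:

```python
# Hypothetical ALS loop for the MF baseline, in the spirit of the
# CVXPy-based implementation the setup mentions. Nonnegativity, rank,
# and iteration count are assumptions for illustration only.
import cvxpy as cp
import numpy as np

M = np.abs(np.random.randn(40, 12))   # toy homes-by-months energy matrix
rank, n_iters = 3, 10

W = np.abs(np.random.randn(40, rank))
H = np.abs(np.random.randn(rank, 12))

for _ in range(n_iters):
    # Fix W, solve for H: convex least squares in H.
    H_var = cp.Variable((rank, 12), nonneg=True)
    cp.Problem(cp.Minimize(cp.sum_squares(M - W @ H_var))).solve()
    H = H_var.value
    # Fix H, solve for W: convex least squares in W.
    W_var = cp.Variable((40, rank), nonneg=True)
    cp.Problem(cp.Minimize(cp.sum_squares(M - W_var @ H))).solve()
    W = W_var.value

print("reconstruction error:", np.linalg.norm(M - W @ H))
```

Each half-step is a convex quadratic program, so CVXPy's default solver handles it directly; alternating the two half-steps is what makes the overall (non-convex) factorization tractable.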