Optimal Policy for Deployment of Machine Learning Models on Energy-Bounded Systems

Authors: Seyed Iman Mirzadeh, Hassan Ghasemzadeh

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that on the ImageNet dataset, we can achieve a 20% energy reduction with only 0.3% accuracy drop compared to Squeeze-and-Excitation Networks. By performing comprehensive experiments on different machine learning tasks, we show the performance gain of our proposed solution.
Researcher Affiliation | Academia | Seyed Iman Mirzadeh and Hassan Ghasemzadeh, Washington State University, USA. {seyediman.mirzadeh, hassan.ghasemzadeh}@wsu.edu
Pseudocode | No | The paper describes mathematical formulations and algorithms but does not provide any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using several open-source libraries (e.g., scikit-learn, TensorFlow, CVXPY, PyTorch) but does not provide a link or explicit statement about releasing the source code for the methodology developed in this paper.
Open Datasets | Yes | We used the Human Activity Recognition Using Smartphones (UCI-HAR) dataset [Anguita et al., 2013]... The ImageNet classification dataset [Russakovsky et al., 2014] has 1.28 million training images and 50,000 validation images that include 1000 classes.
Dataset Splits | Yes | The ImageNet classification dataset [Russakovsky et al., 2014] has 1.28 million training images and 50,000 validation images that include 1000 classes.
Hardware Specification | Yes | To measure the power consumption of different machine learning models with different implementations, we utilized Intel's Running Average Power Limit (RAPL) [Weaver et al., 2012] implemented in the Likwid library [Center, 2019]. RAPL allows us to monitor energy consumption on the CPU chip and the attached DRAM. For a fair comparison, we used only a single core and fixed the clock frequency at 1.5GHz for all our experiments. (An illustrative RAPL measurement sketch follows the table.)
Software Dependencies | No | The paper mentions software like 'scikit-learn', 'Tensorflow library', 'CVXPY library' with 'ECOS solver', and 'Pytorch framework', but none of these mentions includes a specific version number. For example, 'scikit-learn [Pedregosa et al., 2011]' only provides a citation year, not a version.
Experiment Setup | Yes | For the classification task on this dataset, we used the objective introduced in (1) where K is set to 1000 inferences, λ = 0.1K = 100, and u_i is set to the constant value of 1 to penalize selecting many models. Both neural networks were trained using the Adam Optimizer [Kingma and Ba, 2014] with the Tensorflow library for 50 epochs with early stopping. (Illustrative sketches of the allocation problem and the training configuration follow the table.)
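
The Hardware Specification row above reports that energy was measured with Intel's RAPL counters through the Likwid library, on a single core fixed at 1.5 GHz. As a rough illustration of that kind of measurement rather than the authors' setup, the sketch below reads the same package-level RAPL counter through Linux's powercap sysfs interface; the domain path and the placeholder workload are assumptions.

```python
# A rough sketch of reading Intel RAPL energy counters. The paper uses RAPL
# through the Likwid library; this alternative reads the same counters via
# Linux's powercap sysfs interface (requires an Intel CPU and read permission
# on /sys/class/powercap). The domain path and workload below are assumptions.
import time

PKG = "/sys/class/powercap/intel-rapl:0"        # package domain (CPU chip)

def read_uj(domain):
    """Return the cumulative energy counter of a RAPL domain in microjoules."""
    with open(f"{domain}/energy_uj") as f:
        return int(f.read())

def measure(workload):
    """Run `workload` and return the package energy it consumed, in joules."""
    with open(f"{PKG}/max_energy_range_uj") as f:
        wrap = int(f.read())                     # counter wraps at this value
    start = read_uj(PKG)
    workload()
    end = read_uj(PKG)
    return ((end - start) % wrap) / 1e6          # handle counter wraparound

if __name__ == "__main__":
    # Placeholder workload standing in for a batch of model inferences.
    energy_j = measure(lambda: time.sleep(1.0))
    print(f"package energy: {energy_j:.3f} J")
```

On Linux, restricting the run to one core can be done with `taskset -c 0`; fixing the clock at 1.5 GHz as in the paper additionally requires a cpufreq/cpupower setting.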
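
The Experiment Setup row references an objective (1) with K = 1000 inferences, λ = 0.1K = 100, and u_i = 1, and the Software Dependencies row notes that the paper used CVXPY with the ECOS solver. The paper's exact formulation is not reproduced here; the sketch below is only a simplified stand-in for that CVXPY/ECOS workflow. It allocates K inferences across hypothetical models under a made-up energy budget and drops the λ·Σ u_i penalty on the number of selected models so that the problem stays a plain LP.

```python
# A minimal CVXPY sketch (illustrative only): allocate K inferences across n
# candidate models to maximize expected accuracy under an energy budget.
# The per-model accuracies, energies, and the budget are made up; the paper's
# actual objective (1) additionally penalizes the number of selected models
# (lambda = 0.1*K, u_i = 1), which is omitted here.
import cvxpy as cp
import numpy as np

K = 1000                                  # total number of inferences (from the paper)
acc = np.array([0.70, 0.76, 0.81, 0.85])  # hypothetical per-model accuracies
energy = np.array([1.0, 1.8, 3.2, 5.5])   # hypothetical per-inference energy (J)
budget = 2.5 * K                          # hypothetical total energy budget (J)

x = cp.Variable(len(acc), nonneg=True)    # inferences assigned to each model
objective = cp.Maximize(acc @ x)          # expected number of correct predictions
constraints = [cp.sum(x) == K, energy @ x <= budget]
prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.ECOS)                # the paper reports using the ECOS solver

print("allocation:", np.round(x.value, 1))
print("expected correct predictions:", round(prob.value, 1))
```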
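
The same row quotes the training recipe (Adam optimizer, 50 epochs, early stopping, TensorFlow). A minimal Keras sketch of such a recipe is given below; the architecture, the early-stopping patience, and the validation split are assumptions, with the input and output sizes set to UCI-HAR's 561 features and 6 activity classes.

```python
# A minimal Keras sketch of the reported training recipe (Adam, 50 epochs,
# early stopping). The architecture and hyperparameters below are placeholders,
# not the networks used in the paper.
import tensorflow as tf

def build_model(input_dim=561, num_classes=6):   # UCI-HAR: 561 features, 6 activities
    return tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)  # patience is an assumption

# x_train / y_train would come from the UCI-HAR dataset:
# model.fit(x_train, y_train, validation_split=0.1,
#           epochs=50, callbacks=[early_stop])
```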