EA-HAS-Bench: Energy-aware Hyperparameter and Architecture Search Benchmark
Authors: Shuguang Dou, Xinyang Jiang, Cairong Zhao, Dongsheng Li
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Specifically, we present the first large-scale energy-aware benchmark that allows studying AutoML methods to achieve better trade-offs between performance and search energy consumption, named EA-HAS-Bench. EA-HAS-Bench provides a large-scale architecture/hyperparameter joint search space, covering diversified configurations related to energy consumption. Furthermore, we propose a novel surrogate model specially designed for large joint search space, which proposes a Bézier curve-based model to predict learning curves with unlimited shape and length. Based on the proposed dataset, we modify existing AutoML algorithms to consider the search energy consumption, and our experiments show that the modified energy-aware AutoML methods achieve a better trade-off between energy consumption and model performance. (A minimal sketch of the Bézier learning-curve idea follows the table.) |
| Researcher Affiliation | Collaboration | Shuguang Dou¹, Xinyang Jiang², Cairong Zhao¹, Dongsheng Li²; ¹Tongji University, ²Microsoft Research Asia |
| Pseudocode | Yes | Listing 1: GPU Tracer (a hedged re-implementation sketch follows the table) |
| Open Source Code | Yes | The dataset and codebase of EA-HAS-Bench are available at https://github.com/microsoft/EA-HAS-Bench. |
| Open Datasets | Yes | The sampled architecture and hyperparameter configurations are trained and evaluated on two of the most popular image classification datasets, namely CIFAR-10 (Krizhevsky et al., 2009) and the MicroImageNet challenge (Tiny ImageNet) dataset (Le & Yang, 2015). |
| Dataset Splits | Yes | The sampled configurations on CIFAR-10 and Tiny ImageNet are split into training, validation, and testing sets containing 11597, 1288, and 1000 samples respectively. |
| Hardware Specification | Yes | Table 6: Details of the machine used to collect energy consumption. CPU: Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60 GHz; Memory: 112 GB; Operating system: Linux Ubuntu 20.04 LTS; Hard drive: 1000 GB; GPU: Nvidia Tesla V100 with 32 GB memory |
| Software Dependencies | No | The paper includes code snippets using libraries such as `pynvml` and `torch`, but it does not specify version numbers for these or any other software dependencies, such as LGB (LightGBM). |
| Experiment Setup | Yes | Experimental setup. Since EA-HAS-Bench focuses on the trade-off between model performance and search energy cost, in this experiment we use the total search energy cost as the resource limitation, instead of training time. As a result, we set the maximum search cost to roughly 40,000 kWh for CIFAR-10 and 250,000 kWh for Tiny ImageNet, which is equivalent to running a single-fidelity HPO algorithm for about 1,000 iterations. (A sketch of such an energy-budgeted search loop follows the table.) |
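The surrogate model quoted in the Research Type row predicts learning curves via a Bézier parameterization. A degree-n Bézier curve with control points P_0, ..., P_n is B(t) = Σ_{i=0}^{n} C(n, i) · t^i · (1 − t)^(n−i) · P_i for t ∈ [0, 1], so sampling t once per training epoch yields a curve of any length from one fixed-size set of control values. The sketch below only evaluates that formula; it is not the paper's surrogate, and in EA-HAS-Bench the control points would come from a learned predictor rather than being hand-set as here.

```python
import numpy as np
from math import comb  # Python 3.8+

def bezier_learning_curve(control_points, num_epochs):
    """Sample a Bezier curve at `num_epochs` evenly spaced parameter
    values, turning one fixed-size set of control values into a
    learning curve of arbitrary length."""
    p = np.asarray(control_points, dtype=float)
    n = len(p) - 1
    t = np.linspace(0.0, 1.0, num_epochs)
    # Bernstein basis: B_{i,n}(t) = C(n, i) * t^i * (1 - t)^(n - i)
    basis = np.stack([comb(n, i) * t**i * (1.0 - t)**(n - i)
                      for i in range(n + 1)], axis=1)
    return basis @ p  # shape: (num_epochs,)

# Example: four (hand-set, illustrative) control values over 100 epochs.
curve = bezier_learning_curve([0.10, 0.60, 0.85, 0.90], num_epochs=100)
```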
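The pseudocode the paper provides (Listing 1: GPU Tracer) is built on pynvml, whose power readout the paper's energy measurements rely on. The listing itself is not reproduced here, so the following is a minimal polling-based sketch of such a tracer; the class name, the fixed 1-second interval, and the `stop_condition` callback are illustrative assumptions, not the paper's exact design.

```python
import time

import pynvml


class GPUTracer:
    """Polls the GPU's instantaneous power draw via NVML and
    integrates it over time into an energy estimate."""

    def __init__(self, device_index=0, interval_s=1.0):
        pynvml.nvmlInit()
        self.handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        self.interval_s = interval_s
        self.energy_joules = 0.0

    def trace(self, stop_condition):
        # Sample power at a fixed interval until the caller says stop.
        while not stop_condition():
            # nvmlDeviceGetPowerUsage returns milliwatts.
            power_w = pynvml.nvmlDeviceGetPowerUsage(self.handle) / 1000.0
            self.energy_joules += power_w * self.interval_s
            time.sleep(self.interval_s)

    @property
    def energy_kwh(self):
        return self.energy_joules / 3.6e6  # 1 kWh = 3.6e6 joules

    def close(self):
        pynvml.nvmlShutdown()
```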
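To make "total search energy cost as the resource limitation" concrete, here is a minimal sketch of a single-fidelity random search that stops on an energy budget instead of wall-clock time. `sample_config` and `query_benchmark` are hypothetical stand-ins for the benchmark's config sampler and tabular/surrogate lookup; only the budget values come from the setup quoted above.

```python
def energy_aware_random_search(sample_config, query_benchmark, budget_kwh):
    """Single-fidelity random search bounded by search energy, not time.

    `query_benchmark(config)` is assumed to return a tuple of
    (validation_accuracy, training_energy_kwh), which a tabular or
    surrogate benchmark like EA-HAS-Bench can provide cheaply.
    """
    spent_kwh = 0.0
    best = None  # (config, accuracy) of the incumbent
    while spent_kwh < budget_kwh:
        config = sample_config()
        accuracy, energy_kwh = query_benchmark(config)
        spent_kwh += energy_kwh  # charge the trial against the budget
        if best is None or accuracy > best[1]:
            best = (config, accuracy)
    return best, spent_kwh

# Budgets quoted above: 40,000 kWh (CIFAR-10), 250,000 kWh (Tiny ImageNet).
```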