Learning Augmented Energy Minimization via Speed Scaling

Authors: Etienne Bamas, Andreas Maggiori, Lars Rohwedder, Ola Svensson

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we will test the LAS algorithm on both synthetic and real datasets. We will calculate the competitive ratios with respect to the offline optimum." (The competitive-ratio computation is sketched after the table.)
Researcher Affiliation | Academia | Etienne Bamas (EPFL, Switzerland, etienne.bamas@epfl.ch); Andreas Maggiori (EPFL, Switzerland, andreas.maggiori@epfl.ch); Lars Rohwedder (EPFL, Switzerland, lars.rohwedder@epfl.ch); Ola Svensson (EPFL, Switzerland, ola.svensson@epfl.ch)
Pseudocode | Yes | Algorithm 1 LEARNING AUGMENTED SCHEDULING (LAS). Input: T, D, and w^pred initially and w^real in an online fashion. Output: a feasible schedule (s_i)_{i=0}^{T+D}. Let δ > 0 with ((1+δ)/(1−δ))^α = 1 + ε. Compute the optimal offline schedule for (w^pred, T, (1−δ)D), in which each job w^pred_i is run at a uniform speed c_i in a disjoint interval [a_i, b_i], using [17]. (A Python sketch of this skeleton follows the table.)
Open Source Code | Yes | "We note that the code is publicly available at https://github.com/andreasr27/LAS."
Open Datasets | Yes | "Real dataset. We provide additional evidence that the LAS algorithm outperforms purely online algorithms by conducting experiments on the login requests to Brightkite [5]"
Dataset Splits | No | The paper evaluates on synthetic and real datasets but does not provide train/validation/test splits (no percentages, counts, or splitting methodology). For the real dataset it uses the 'access patterns of the previous day as a prediction for the current day', a temporal construction of the prediction input rather than a standard train/validation/test split for model evaluation. (This construction is sketched after the table.)
Hardware Specification | No | The paper does not provide hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper states that 'the code is publicly available' but does not list software dependencies with version numbers (e.g., Python version, or library versions such as TensorFlow, PyTorch, or scikit-learn).
Experiment Setup | Yes | "We fix α = 3 in all our experiments as this value models the power consumption of modern processors (see Bansal et al. [2])." For the artificial datasets, "We used m = 20, M = 80, s = 5, T = 220 and D = 20." For the real dataset, "The timeline was discretized in chunks of ten minutes and D was set to 20." The paper also reports performance for several values of ε (e.g., ε = 0.01 and ε = 0.8).
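
The pseudocode row above translates almost directly into code. Below is a minimal Python sketch of the offline phase of LAS; `optimal_offline_schedule` is a hypothetical stand-in for the offline algorithm of [17] and is not part of the authors' repository. Only the relation between δ and ε and the shrunken deadline (1−δ)D are taken from the quoted pseudocode.

```python
# Minimal sketch of the LAS offline phase (Algorithm 1). The helper
# `optimal_offline_schedule` is hypothetical and stands in for the offline
# algorithm of [17]; only the delta/epsilon relation and the shrunken
# deadline (1 - delta) * D come from the pseudocode quoted above.

def delta_from_epsilon(epsilon: float, alpha: float) -> float:
    """Solve ((1 + d) / (1 - d)) ** alpha == 1 + epsilon for d in (0, 1)."""
    r = (1.0 + epsilon) ** (1.0 / alpha)  # r = (1 + d) / (1 - d)
    return (r - 1.0) / (r + 1.0)

def las_offline_phase(w_pred, T, D, alpha, epsilon, optimal_offline_schedule):
    """Offline phase: schedule the predicted jobs with deadlines shrunk to
    (1 - delta) * D, leaving slack to absorb prediction errors online.

    The stand-in `optimal_offline_schedule` should return, for each job i,
    a uniform speed c_i and a disjoint interval [a_i, b_i] in which the
    predicted work w_pred[i] is run.
    """
    delta = delta_from_epsilon(epsilon, alpha)
    return delta, optimal_offline_schedule(w_pred, T, (1.0 - delta) * D)
```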
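
The "Dataset Splits" row describes the real-data prediction as yesterday's access pattern. The sketch below illustrates that temporal construction, assuming the workloads are stored as a days × ten-minute-slots array; the array layout and the toy Poisson data are assumptions, not the Brightkite data itself.

```python
import numpy as np

# counts[d, t]: login requests on day d in ten-minute slot t (144 slots per
# day, matching the ten-minute discretization in the "Experiment Setup" row).
# Toy Poisson data stands in for the Brightkite login counts.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=3.0, size=(7, 144))

# The prediction for day d is the observed pattern of day d - 1.
w_pred = counts[:-1]  # predictions for days 1..6
w_real = counts[1:]   # realized workloads for days 1..6
```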
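
The "Research Type" row mentions competitive ratios with respect to the offline optimum. In the speed-scaling model a processor running at speed s consumes power s^α, so with the paper's α = 3 and unit-length time slots the comparison reduces to the generic sketch below; this is the standard energy formula, not code from the authors' repository.

```python
ALPHA = 3  # power exponent fixed in all experiments (see "Experiment Setup")

def energy(speeds, alpha=ALPHA):
    """Energy of a schedule: sum of s ** alpha over unit-length time slots."""
    return sum(s ** alpha for s in speeds)

def competitive_ratio(alg_speeds, opt_speeds, alpha=ALPHA):
    """Energy of the algorithm's schedule divided by the offline optimum's."""
    return energy(alg_speeds, alpha) / energy(opt_speeds, alpha)
```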