Energy-Efficient Scheduling with Predictions

Authors: Eric Balkanski, Noemie Perivier, Clifford Stein, Hao-Ting Wei

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we empirically demonstrate that this framework achieves an improved performance on real and synthetic datasets."
Researcher Affiliation | Academia | Eric Balkanski (Columbia University, eb3224@columbia.edu), Noemie Perivier (Columbia University, np2708@columbia.edu), Clifford Stein (Columbia University, cliff@ieor.columbia.edu), Hao-Ting Wei (Columbia University, hw2738@columbia.edu)
Pseudocode | Yes | Algorithm 1: Two-Phase Energy Efficient Scheduling (TPE)
Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor links to a code repository for the described methodology.
Open Datasets | Yes | "We also evaluate the two algorithms on the College Message dataset from the SNAP database [26], where the scheduler must process messages that arrive over 9 days, each with between 300 and 500 messages."
Dataset Splits | No | The paper describes how synthetic data and predictions for the real data are generated to evaluate the algorithm under different error parameters, but it does not specify train/validation/test splits, since the algorithm is not a machine-learning model with a training phase.
Hardware Specification | No | The paper does not provide details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper does not specify any software with version numbers (e.g., programming languages, libraries, frameworks, or solvers) needed to reproduce the experiments.
Experiment Setup | Yes | "Specifically, we consider the energy plus flow time minimization problem where F(S, J) = Σ_{j ∈ J} (c_j − r_j), and consider unit-work jobs (i.e., p_j = 1 for all j) and fix α = 3. ... TPE-S is Algorithm 2 with the default setting λ = 0.02, η_shift = 1 and σ = 0.4, where σ is a parameter that controls the level of prediction error, which we call the error parameter. ... In all experiments, we use the values a = 100, M = 500."
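The experiment setup fixes the objective to energy plus flow time with unit-work jobs and α = 3. Under the standard speed-scaling power model P(s) = s^α (power s^α while running at speed s), that objective can be evaluated with a short sketch; the schedule representation below (release/start/speed triples for jobs run non-preemptively at constant speed) is our own illustrative assumption, not the paper's interface.

```python
# Minimal sketch of the energy-plus-flow-time objective for unit-work
# jobs under the power model P(s) = s**alpha, with alpha = 3 as in the
# paper's experiments. The (release, start, speed) schedule format is a
# hypothetical representation chosen for illustration.

ALPHA = 3  # power exponent fixed to 3 in the experiments


def energy_plus_flow_time(schedule, alpha=ALPHA):
    """schedule: list of (release, start, speed) triples, one per
    unit-work job (p_j = 1), each run at a constant speed."""
    total = 0.0
    for release, start, speed in schedule:
        duration = 1.0 / speed                 # unit work at speed s
        energy = speed ** alpha * duration     # = speed**(alpha - 1)
        completion = start + duration          # c_j
        flow = completion - release            # c_j - r_j
        total += energy + flow
    return total
```

Running one job released at time 0 at speed 1 costs energy 1 and flow time 1; raising the speed to 2 halves the flow time but raises the energy to 4, which is the energy/delay trade-off the objective captures.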
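For the real-data experiment, the scheduler processes College Message traffic batched by day. Assuming the common SNAP temporal-edge file format of one "SRC DST UNIX_TIMESTAMP" record per line (an assumption about the file layout, not something stated in the paper), the per-day batching could be sketched as:

```python
# Hedged sketch: group SNAP CollegeMsg-style records into per-day
# message batches. The "SRC DST UNIX_TIMESTAMP" field layout is an
# assumption about the dataset file, not taken from the paper.
from collections import defaultdict

SECONDS_PER_DAY = 86400


def messages_per_day(lines):
    """Return {day_index: [timestamps]}, with day 0 starting at the
    earliest message in the input."""
    times = [int(line.split()[2]) for line in lines if line.strip()]
    t0 = min(times)
    days = defaultdict(list)
    for t in times:
        days[(t - t0) // SECONDS_PER_DAY].append(t)
    return dict(days)
```

Each batch can then be handed to the scheduler as that day's arriving jobs, matching the paper's description of messages arriving over 9 days.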