Scheduling jobs with stochastic holding costs

Authors: Dabeen Lee, Milan Vojnovic

NeurIPS 2021

Reproducibility assessment. Each entry below lists the variable, the assessed result, and the supporting LLM response.
Research Type: Experimental
LLM Response: Our numerical results demonstrate the effectiveness of our algorithm and show that our theoretical regret analysis is nearly tight. We run experiments to assess the numerical performance of the preemptive-then-nonpreemptive empirical cµ rule, comparing the efficiency of Algorithm 1 against the preemptive empirical cµ rule and its nonpreemptive version. We also evaluate the tightness of the proposed upper and lower bounds on regret by measuring how the expected regret behaves as a function of the parameters N and T on randomly generated instances. (A sketch of the cµ index that these rules estimate is given below.)
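For context, the cµ index ranks jobs by the product of holding-cost rate and service rate, and serves higher-index jobs first. A minimal sketch follows; the names holding_costs and service_rates are illustrative, not taken from the authors' code.

```python
def cmu_order(holding_costs, service_rates):
    """Return job indices sorted by decreasing c_i * mu_i priority."""
    priority = lambda i: holding_costs[i] * service_rates[i]
    return sorted(range(len(holding_costs)), key=priority, reverse=True)

# With unit service rates, the rule serves the costliest jobs first.
print(cmu_order([0.2, 0.9, 0.5], [1.0, 1.0, 1.0]))  # -> [1, 2, 0]
```
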
Researcher Affiliation: Academia
LLM Response: Dabeen Lee, Discrete Mathematics Group, Institute for Basic Science (IBS), Daejeon, South Korea (dabeenl@ibs.re.kr); Milan Vojnovic, Department of Statistics, London School of Economics, London, United Kingdom (m.vojnovic@lse.ac.uk).
Pseudocode: Yes
LLM Response: Algorithm 1, "Preemptive-then-nonpreemptive empirical cµ rule"; Algorithm 2, "Preemptive-then-nonpreemptive empirical cµ rule for stochastic service times". (A hedged sketch of the two-phase structure follows.)
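The exact pseudocode is in the paper; as a rough illustration of what the algorithm's name suggests, here is a hedged sketch of a two-phase scheduler: a preemptive phase that re-ranks jobs as empirical holding-cost estimates improve, followed by a nonpreemptive commitment to the resulting empirical cµ order. The switch time, estimator, and service model below are placeholder assumptions, not the authors' specification.

```python
import random

def preemptive_then_nonpreemptive(n_jobs, mu, sample_cost, switch_time):
    cost_sum = [0.0] * n_jobs
    n_obs = [0] * n_jobs

    def empirical_index(i):
        # Unobserved jobs get top priority so that every job gets sampled.
        if n_obs[i] == 0:
            return float("inf")
        return (cost_sum[i] / n_obs[i]) * mu[i]

    prefix = []
    # Phase 1 (preemptive): re-rank jobs after every cost observation.
    for _ in range(switch_time):
        i = max(range(n_jobs), key=empirical_index)
        cost_sum[i] += sample_cost(i)
        n_obs[i] += 1
        prefix.append(i)

    # Phase 2 (nonpreemptive): fix the empirical c-mu order and serve
    # each remaining job to completion in that order.
    final_order = sorted(range(n_jobs), key=empirical_index, reverse=True)
    return prefix, final_order

random.seed(0)
mu = [1.0, 1.0, 1.0]
true_c = [0.2, 0.9, 0.5]
noisy_cost = lambda i: true_c[i] + random.gauss(0.0, 0.05)
_, order = preemptive_then_nonpreemptive(3, mu, noisy_cost, switch_time=30)
print(order)  # typically [1, 2, 0] once the estimates concentrate
```
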
Open Source Code: Yes
LLM Response: Our code for running the experiments and the obtained data are publicly available at https://github.com/learning-to-schedule/learning-to-schedule.
Open Datasets: No
LLM Response: The paper does not explicitly state that it uses a publicly available or open dataset; it describes generating random instances for its experiments.

Dataset Splits: No
LLM Response: The paper does not provide training/test/validation dataset splits; it describes generating random instances for its experiments.

Hardware Specification: Yes
LLM Response: All experiments are conducted on an Intel Core i5 3GHz processor with 6 cores and 32GB memory.

Software Dependencies: No
LLM Response: The paper does not provide specific version numbers for any software dependencies.
Experiment Setup: Yes
LLM Response: For Figure 1, we used instances with N = 20, T = 2000, µ_i = 1 for i ∈ [N], and c_1, ..., c_N sampled from the uniform distribution on [0.5 − ε, 0.5 + ε), where ε is a parameter that we vary. For each value of ε, we generated 100 instances... To examine how the expected regret of Algorithm 1 grows as a function of T, we test instances with N = 20 and values of T from 20 to 1,000,000. To understand how the expected regret depends on N, we also run experiments with instances with T = 1000 and values of N from 2 to 1000. For both experiments, we set µ_i = 1 for i ∈ [N] and ε = 0.001... (A sketch of this instance generation is given below.)
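A small sketch of the described instance generation, assuming i.i.d. uniform draws for the cost means; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def generate_instances(n_instances=100, N=20, T=2000, eps=0.1, seed=0):
    """Generate random instances as described: mu_i = 1 for all i, and
    cost means c_i drawn uniformly from [0.5 - eps, 0.5 + eps)."""
    rng = np.random.default_rng(seed)
    instances = []
    for _ in range(n_instances):
        instances.append({
            "mu": np.ones(N),                                # mu_i = 1 for all i
            "c": rng.uniform(0.5 - eps, 0.5 + eps, size=N),  # cost means near 0.5
            "T": T,
        })
    return instances

# 100 instances per value of eps, as in the Figure 1 setup.
instances = generate_instances(eps=0.001)
print(len(instances), instances[0]["c"][:3])
```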