Online Task Assignment with Controllable Processing Time

Authors: Ruoyu Wu, Wei Bao, Liming Ge

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate OMLA against five benchmarks... We generate the synthetic data set in the experiment... In Figure 1, we investigate the ratio between the online performance of OMLA and LP(Off)... In Figure 2, we compare the performance of OMLA with benchmarks.
Researcher Affiliation | Academia | Ruoyu Wu, Wei Bao, Liming Ge, School of Computer Science, The University of Sydney; ruwu6940@uni.sydney.edu.au, {wei.bao, liming.ge}@sydney.edu.au
Pseudocode | Yes | Algorithm 1: OMLA Algorithm; Algorithm 2: Calculation of Activation and Baseline Values
Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the described methodology or a link to a code repository.
Open Datasets | No | We generate the synthetic data set in the experiment. (The approach was also adopted in [Sumita et al., 2022].)
Dataset Splits | No | The paper describes generating synthetic data and running multiple rounds of experiments with randomly generated task sequences, but it does not specify explicit training, validation, and testing dataset splits with percentages or counts.
Hardware Specification | No | The paper describes the experimental setup but does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers, such as programming languages, libraries, or solvers used for implementation or experimentation.
Experiment Setup | Yes | We set |U| = 10, |V| = 25, and T = 100. For each e ∈ E, we set q_e ~ U(0.5, 1) and r_{u,v,l} ~ U(a·l^0.2, a·l^0.4), where a ~ U(0.5, 1). For each l ∈ L we set the distribution D_l as a binomial distribution B(T, l^1.2/20). For settings (a), (b) in Figure 2 and (a), (b) in Figure 3, u is drawn uniformly from [ ]. We set the rejection penalty for level l as θ_l = l + 2.
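
The quoted setup is concrete enough to sketch the synthetic data generation in code. The snippet below is a minimal, hypothetical reconstruction under stated assumptions, not the authors' code (none is released): the use of NumPy, the fixed seed, and the level set LEVELS = [1, 2, 3] are assumptions (the quote does not state |L|), and the per-u parameter whose uniform range is elided in the quote ("drawn uniformly from [ ]") is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed is an assumption, for repeatability

# Constants quoted from the paper: |U| = 10, |V| = 25, T = 100.
NUM_U, NUM_V, T = 10, 25, 100

# Assumed level set L; the quoted setup does not give |L|.
LEVELS = [1, 2, 3]

# q_e ~ U(0.5, 1) for every edge e in E = U x V.
q = rng.uniform(0.5, 1.0, size=(NUM_U, NUM_V))

# r_{u,v,l} ~ U(a * l^0.2, a * l^0.4) with a ~ U(0.5, 1), drawn per edge.
# Note: at l = 1 both bounds equal a, so the draw is degenerate (r = a).
a = rng.uniform(0.5, 1.0, size=(NUM_U, NUM_V))
r = np.stack([rng.uniform(a * l**0.2, a * l**0.4) for l in LEVELS], axis=-1)

# D_l is the binomial distribution B(T, l^1.2 / 20); one illustrative draw
# per level is shown here, whereas the experiments would sample D_l as needed.
d_samples = {l: int(rng.binomial(T, l**1.2 / 20)) for l in LEVELS}

# Rejection penalty theta_l = l + 2 for each level l.
theta = {l: l + 2 for l in LEVELS}

print(q.shape, r.shape, d_samples, theta)
# -> (10, 25) (10, 25, 3) {1: ..., 2: ..., 3: ...} {1: 3, 2: 4, 3: 5}
```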