Time Fairness in Online Knapsack Problems

Authors: Adam Lechowicz, Rik Sengupta, Bo Sun, Shahin Kamali, Mohammad Hajiesmaili

ICLR 2024

Reproducibility Variable | Result | LLM Response

Research Type: Experimental
LLM Response: "5 NUMERICAL EXPERIMENTS In this section, we present numerical experiments for OKP algorithms in the context of the online job scheduling problem from Example 1.1. We evaluate our proposed algorithms, including ECT and LA-ECT, against existing algorithms from the literature."

Researcher Affiliation: Academia
LLM Response: "Adam Lechowicz, University of Massachusetts Amherst (alechowicz@cs.umass.edu); Rik Sengupta, University of Massachusetts Amherst (rsengupta@cs.umass.edu); Bo Sun, University of Waterloo (bo.sun@uwaterloo.ca); Shahin Kamali, York University (kamalis@yorku.ca); Mohammad Hajiesmaili, University of Massachusetts Amherst (hajiesmaili@cs.umass.edu)"
Pseudocode: No
LLM Response: The paper describes algorithms (e.g., ZCL, ECT, LA-ECT) and defines their threshold functions using mathematical formulas (e.g., Φ(z) = (Ue/L)^z · (L/e)). However, it does not include any formally structured pseudocode blocks or algorithms labeled as 'Pseudocode' or 'Algorithm'.
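The threshold formula quoted in the Pseudocode row matches the standard ZCL threshold function for online knapsack, Φ(z) = (Ue/L)^z · (L/e), which prices admission as a function of current knapsack utilization z ∈ [0, 1]. A minimal sketch, assuming that standard form (the function name `zcl_threshold` is hypothetical, not from the paper's code):

```python
import math

def zcl_threshold(z: float, L: float, U: float) -> float:
    """Standard ZCL threshold for online knapsack at utilization z in [0, 1]:
        Phi(z) = (U * e / L)^z * (L / e)
    so Phi(0) = L/e and Phi(1) = U; an item is admitted only if its
    value density exceeds the current threshold."""
    return (U * math.e / L) ** z * (L / math.e)
```

By design the threshold grows exponentially from L/e to U as the knapsack fills, so early (low-utilization) items are admitted cheaply while late capacity is reserved for high-density items.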
Open Source Code: Yes
LLM Response: "Our code is available at https://github.com/adamlechowicz/fair-knapsack."

Open Datasets: No
LLM Response: "We conduct experiments on synthetic instances for OKP, which emulate a typical online cloud allocation task. (...) We generate value densities for each item in the range [L, U] according to a power-law distribution (giving relatively few jobs with very high value, and many items with average and low values). We consider a one-dimensional knapsack (i.e., server w/ single CPU) and set the capacity to 1. Weights are assigned uniformly randomly and are small compared to the total knapsack capacity (e.g., the maximum weight is 0.05)." The paper states that synthetic instances are generated but does not provide specific access information (e.g., link, DOI, citation) or confirm public availability for these generated datasets.

Dataset Splits: No
LLM Response: The 'Setup' subsection in Section 5 describes how synthetic data is generated and its characteristics (e.g., power-law distribution for value densities, uniformly random weights). However, it does not specify any training, validation, or test dataset splits (e.g., percentages or sample counts for each split).

Hardware Specification: No
LLM Response: The paper does not provide specific details about the hardware used for running experiments, such as CPU models, GPU models, or memory specifications. It only mentions general aspects of the simulated environment, like 'one-dimensional knapsack (i.e., server w/ single CPU)', which describes the problem setting, not the computational hardware.

Software Dependencies: No
LLM Response: The paper does not list specific software dependencies with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9') that would be needed to reproduce the experiments. The code link is provided, but the specific software environment is not described in the text.
Experiment Setup: Yes
LLM Response: "Setup. To validate the performance of our algorithms and quantify the empirical trade-off between fairness and efficiency (resp. static and dynamic pricing), we conduct experiments on synthetic instances for OKP, which emulate a typical online cloud allocation task. We simulate different value density ratios U/L ∈ {100, 500, 2500} by setting L and U accordingly. We generate value densities for each item in the range [L, U] according to a power-law distribution (giving relatively few jobs with very high value, and many items with average and low values). We consider a one-dimensional knapsack (i.e., server w/ single CPU) and set the capacity to 1. Weights are assigned uniformly randomly and are small compared to the total knapsack capacity (e.g., the maximum weight is 0.05). We report the cumulative density functions of the empirical competitive ratios, which show the average and the worst-case performances of several algorithms as described below."
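The quoted Setup fixes the instance shape (power-law value densities in [L, U], uniform small weights, unit capacity) but not the exact sampling scheme. A minimal sketch of such a generator, assuming a bounded Pareto for the power-law and an exponent `alpha` that the paper does not specify (`generate_instance` and its defaults are hypothetical):

```python
import random

def generate_instance(n, L, U, w_max=0.05, alpha=2.5, seed=0):
    """Sketch of one synthetic OKP instance per the paper's Setup:
    value densities in [L, U] from a power-law (few very high-density jobs,
    many average/low ones), weights uniform in (0, w_max], capacity 1.
    Assumption: the power-law is a bounded Pareto with exponent `alpha`,
    sampled by inverse-CDF transform; the paper leaves this unspecified."""
    rng = random.Random(seed)
    items = []
    for _ in range(n):
        u = rng.random()
        # Inverse CDF of a bounded Pareto on [L, U]: mass concentrates near L.
        density = (L ** (1 - alpha)
                   + u * (U ** (1 - alpha) - L ** (1 - alpha))) ** (1 / (1 - alpha))
        weight = rng.uniform(1e-6, w_max)
        items.append((density * weight, weight))  # (value, weight) pair
    return items
```

With alpha > 1 most sampled densities land near L, matching the description of "relatively few jobs with very high value, and many items with average and low values"; repeating the generation over many seeds yields the empirical competitive-ratio distributions the paper reports as CDFs.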