Group-Fair Online Allocation in Continuous Time

Authors: Semih Cayci, Swati Gupta, Atilla Eryilmaz

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5 Simulations: We implemented the OLUM Algorithm on a fair resource allocation problem with K = 2 groups. In the application domains that we considered in Section 2, the task completion times naturally follow a power-law distribution. For the contractual online hiring setting, the creativity of individuals has been shown to follow a Pareto(1, γ) distribution with exponent γ > 1, where γ depends on the field of expertise [37]. Motivated by this application, we consider the following group statistics: Group 1: X_{k,n} ~ Pareto(1, 1.2) and R_{k,n}(t) = X_{k,n}^{0.6} · 1{X_{k,n} ≤ t}; Group 2: X_{k,n} ~ Pareto(1, 1.4) and R_{k,n}(t) = X_{k,n}^{0.2} · 1{X_{k,n} ≤ t}. The reward per processing time as a function of the deadline is shown in Figure 2. For this setting, we implemented the OLUM Algorithm with parameter V = 20 and considered α-fair resource allocation problems with various α values. In Figure 2, we present the simulation results for ϕ2, i.e., the average fraction of the time budget B allocated to Group-2 individuals under the OLUM Algorithm. For these experiments, we chose w_k = 1 for k = 1, 2 and ran the OLUM Algorithm for 1000 trials for each setting.
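The group statistics quoted above are straightforward to simulate. Below is a minimal Python sketch assuming NumPy; the names (GROUPS, sample_task) and the deadline value in the usage line are ours, purely illustrative, and not from the paper.

```python
import numpy as np

# Group statistics from the quoted simulation setup: completion times
# X_{k,n} ~ Pareto(1, gamma) (scale 1, shape gamma) and deadline-truncated
# rewards R_{k,n}(t) = X^p * 1{X <= t}. Shape/exponent values are from the
# paper; all names here are illustrative.
GROUPS = {
    1: {"gamma": 1.2, "p": 0.6},
    2: {"gamma": 1.4, "p": 0.2},
}

def sample_task(group, deadline, rng):
    """Sample one task's processing time and realized reward for a group."""
    params = GROUPS[group]
    # NumPy's pareto() draws from a Lomax distribution; adding 1 gives a
    # Pareto distribution with scale (minimum value) 1.
    x = 1.0 + rng.pareto(params["gamma"])
    reward = x ** params["p"] if x <= deadline else 0.0
    elapsed = min(x, deadline)  # time actually spent before completion or interruption
    return elapsed, reward

rng = np.random.default_rng(0)
print(sample_task(group=1, deadline=5.0, rng=rng))
```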
Researcher Affiliation | Academia | Semih Cayci (scayci@illinois.edu), Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801; Swati Gupta (swatig@gatech.edu), H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332; Atilla Eryilmaz (eryilmaz.2@osu.edu), Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210
Pseudocode | Yes | Definition 4 (OLUM Algorithm). For any k, let Q_{k,0} = 1 and Q_{k,i} be defined recursively as follows:

Q_{k,i+1} = [ Q_{k,i} + γ_k(i) · min{X_{G_i,i}, T_i} − R_{k,i}(T_i) · 1{G_i = k} ]^+,  i > 0,   (6)

where the auxiliary variable γ_k(i) = (U′_k)^{-1}(Q_{k,i}/V) and V > 0 is a design choice. Then, for task n, the OLUM Algorithm, denoted by π_OLUM, makes the following decision:

(G_n, T_n) ∈ arg max_{(k,t) ∈ [K] × T}  θ̂_{k,n}^{τ(t)} · Q_{k,n} / μ̂_{k,n}^{τ(t)}.

Upon observing the corresponding feedback, the controller updates Q_{k,n+1} via (6).
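A minimal Python sketch of Definition 4 follows, under stated assumptions: the class and method names are ours; γ_k(i) = (U′_k)^{-1}(Q_{k,i}/V) is instantiated for a weighted α-fair utility U_k(x) = w_k · x^(1−α)/(1−α); the decision index implements our reading of the arg max above as a queue-weighted reward-to-processing-time ratio; and the empirical estimates θ̂ and μ̂ are assumed to be maintained by the learner.

```python
import numpy as np

class OLUMSketch:
    """Illustrative sketch of the OLUM controller (Definition 4); not the authors' code."""

    def __init__(self, num_groups, V, weights, alpha, deadlines):
        self.V = float(V)
        self.w = np.asarray(weights, dtype=float)
        self.alpha = float(alpha)
        self.deadlines = list(deadlines)   # candidate deadline set T
        self.Q = np.ones(num_groups)       # Q_{k,0} = 1

    def gamma(self, k):
        # gamma_k(i) = (U'_k)^{-1}(Q_{k,i}/V) for U_k(x) = w_k x^(1-alpha)/(1-alpha):
        # U'_k(x) = w_k x^(-alpha), so (U'_k)^{-1}(y) = (w_k / y)^(1/alpha).
        return (self.w[k] * self.V / self.Q[k]) ** (1.0 / self.alpha)

    def decide(self, theta_hat, mu_hat):
        """Pick (G_n, T_n) maximizing theta_hat[k][t] * Q_k / mu_hat[k][t].

        theta_hat, mu_hat: per-group mappings from deadline t to the empirical
        reward and processing-time estimates (maintained outside this sketch).
        """
        best, best_val = None, -np.inf
        for k in range(len(self.Q)):
            for t in self.deadlines:
                val = theta_hat[k][t] * self.Q[k] / mu_hat[k][t]
                if val > best_val:
                    best, best_val = (k, t), val
        return best

    def update(self, chosen_group, elapsed, reward):
        """Virtual-queue update (6): Q_k <- [Q_k + gamma_k * elapsed - reward * 1{k chosen}]^+."""
        for k in range(len(self.Q)):
            served = reward if k == chosen_group else 0.0
            self.Q[k] = max(self.Q[k] + self.gamma(k) * elapsed - served, 0.0)
```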
Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the methodology described is publicly available.
Open Datasets | No | The paper uses simulated data generated from specified Pareto distributions with defined parameters (e.g., "X_{k,n} ~ Pareto(1, 1.2)") rather than an external publicly available dataset with a direct link or citation.
Dataset Splits | No | The paper describes simulation experiments for an online learning algorithm but does not specify traditional training, validation, and test dataset splits, as it operates in a continuous-time online setting with generated data.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the simulations or experiments.
Software Dependencies | No | The paper mentions implementing the OLUM Algorithm and using Pareto distributions, but it does not list specific software dependencies with version numbers for reproducibility (e.g., programming languages, libraries, or simulation software versions).
Experiment Setup | Yes | For this setting, we implemented the OLUM Algorithm with parameter V = 20 and considered α-fair resource allocation problems with various α values. In Figure 2, we present the simulation results for ϕ2, i.e., the average fraction of the time budget B allocated to Group-2 individuals under the OLUM Algorithm. For these experiments, we chose w_k = 1 for k = 1, 2 and ran the OLUM Algorithm for 1000 trials for each setting. ... For this example, we consider β = (0, 0.125, 0.25, 0.375, 0.5) and ρ = (3.0, 2.0, 3.0, 1.0, 1.44).
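For reference, the sketch below shows the weighted α-fair objective and one way ϕ2 could be averaged over independent trials, mirroring the quoted setup (w_k = 1, 1000 trials). run_olum_trial is a hypothetical driver, assumed to run the controller until the time budget B is spent and to return the time allocated to each group; it is not part of the paper.

```python
import numpy as np

def alpha_fair_utility(rewards, weights, alpha):
    """Weighted alpha-fair utility; alpha = 1 reduces to the log (proportional-fair) case."""
    r = np.asarray(rewards, dtype=float)
    w = np.asarray(weights, dtype=float)
    if alpha == 1.0:
        return float(np.sum(w * np.log(r)))
    return float(np.sum(w * r ** (1.0 - alpha) / (1.0 - alpha)))

def average_phi2(run_olum_trial, B, alpha, num_trials=1000, seed=0):
    """Average fraction of the time budget B spent on Group 2 over independent trials."""
    rng = np.random.default_rng(seed)
    fractions = []
    for _ in range(num_trials):
        time_per_group = run_olum_trial(B=B, alpha=alpha, rng=rng)  # hypothetical driver
        fractions.append(time_per_group[1] / B)                     # index 1 -> Group 2
    return float(np.mean(fractions))
```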