Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Stochastic Bandits for Egalitarian Assignment
Authors: Eugene Lim, Vincent Y. F. Tan, Harold Soh
TMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present the results of our numerical experiments to validate the analysis of EgalUCB. These experiments include both a synthetic environment (Section 7.1) and real-world datasets (Sections 7.2 and 7.3). |
| Researcher Affiliation | Academia | Eugene Lim EMAIL Department of Computer Science, National University of Singapore Vincent Y. F. Tan EMAIL Department of Mathematics, Department of Electrical and Computer Engineering, National University of Singapore Harold Soh EMAIL Department of Computer Science, National University of Singapore |
| Pseudocode | Yes | Algorithm 1: EgalUCB. 1: let number of blocks B_{a,0} = 0, cumulative reward S_{a,0} = 0, and UCB_{a,0} = ∞ for each a ∈ [K]. 2: for b = 1, 2, ..., T/U do. 3: let A_b ⊆ [K] be a set of U arms with highest UCB_{a,b−1}. 4: play arms A_b in a round-robin fashion for the next U steps. 5: update B_{a,b} and S_{a,bU} accordingly for each a ∈ A_b. 6: update UCB_{a,b} = S_{a,bU}/B_{a,b} + √(…) for each a ∈ A_b |
| Open Source Code | Yes | The code for these experiments can be found in the supplementary materials. |
| Open Datasets | Yes | The Google Cluster Usage Traces dataset comprises 2.4 TiB of compressed traces that record the workloads executed on Google compute cells (Wilkes, 2020). The MovieLens 25M dataset is widely used in recommender systems (Kużelewska, 2014; Forouzandeh et al., 2021) and collaborative filtering (He et al., 2017; Álvaro González et al., 2022) research. |
| Dataset Splits | No | The paper describes selecting a subset of data (e.g., "initial 4 million entries" or "randomly select K = 500 movies") and running the policy over a fixed time horizon, but it does not specify explicit training/test/validation splits for the data. The nature of the bandit problem involves online learning rather than traditional dataset splitting. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory amounts used for running the experiments. It mentions running experiments on 'simulated data' and 'real-world data' but no specific computing resources. |
| Software Dependencies | No | The paper does not specify any software names with version numbers for implementation (e.g., Python, PyTorch, TensorFlow, or specific libraries). It only mentions algorithms like UCB1 and MOSS conceptually. |
| Experiment Setup | Yes | In each environment, we fixed K = 10 and varied U from 1 to 5. For each choice of U, we ran the experiment 30 times. In each run, we randomly generated the ground-truth expected reward µ_a for each arm a using a uniform distribution with support [0.01, 0.99]. We fixed K = 2^10 and T = 2^18 while varying U ∈ {2^1, ..., 2^8}. We ran EgalUCB on instances of Bernoulli EgalMAB where µ_a = 0.8 if a ∈ [U] and 0.5 otherwise. We pick K = 100 machines that contain the most trace entries. Then, we formulate scenarios involving U ∈ {5, 10, 15, 20, 25} unseen users. At each time step t ∈ [T] where T = 150,000, we employ the EgalUCB policy to assign machines to these users. To adapt to the EgalMAB setting, we randomly select K = 500 movies and treat them as arms. ... Then, we formulate a scenario involving U ∈ {10, 20, 30, 40, 50} unseen users. At each time step t ∈ [T] where T = 150,000, we employ the EgalUCB policy to assign movies to these users. |
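The block structure quoted in the Pseudocode row can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the UCB1-style confidence radius (constant `c = 2`) is an assumption, since the paper's exact exploration bonus did not survive text extraction, and `pull` is a hypothetical reward oracle standing in for the Bernoulli environments described in the Experiment Setup row.

```python
import math
import random

def egal_ucb(T, K, U, pull, c=2.0):
    """Block-based EgalUCB sketch.

    pull(a) -> stochastic reward in [0, 1] for arm a.
    The UCB1-style radius with constant c is an assumption; the
    paper's exact confidence bonus is not reproduced here.
    """
    B = [0] * K                    # number of blocks each arm was played in
    S = [0.0] * K                  # cumulative reward per arm
    ucb = [math.inf] * K           # optimistic index, starts at infinity
    for b in range(1, T // U + 1):
        # pick the U arms with the highest current UCB indices
        A_b = sorted(range(K), key=lambda a: ucb[a], reverse=True)[:U]
        # play the selected arms round-robin over the next U steps
        for a in A_b:
            S[a] += pull(a)
            B[a] += 1
        # refresh the indices of the arms just played
        n = b * U                  # total pulls so far
        for a in A_b:
            ucb[a] = S[a] / B[a] + math.sqrt(c * math.log(n) / B[a])
    return S, B

# Toy version of the paper's synthetic Bernoulli setting:
# mu_a = 0.8 for the first U arms and 0.5 otherwise (smaller K, T here).
random.seed(0)
K, U, T = 10, 3, 30_000
mu = [0.8 if a < U else 0.5 for a in range(K)]
S, B = egal_ucb(T, K, U, pull=lambda a: float(random.random() < mu[a]))
print(sorted(range(K), key=lambda a: -B[a])[:U])  # arms played most often
```

With the 0.3 reward gap of the synthetic setting, the policy concentrates its blocks on the U high-mean arms, matching the egalitarian objective of assigning every user a good arm.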