Online Restless Multi-Armed Bandits with Long-Term Fairness Constraints

Authors: Shufan Wang, Guojun Xiong, Jian Li

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results further demonstrate the effectiveness of our Fair-UCRL. In this section, we first evaluate the performance of Fair-UCRL in simulated environments, and then demonstrate the utility of Fair-UCRL by evaluating it under three real-world applications of RMAB."
Researcher Affiliation | Academia | "Stony Brook University {shufan.wang, guojun.xiong, jian.li.3}@stonybrook.edu"
Pseudocode | Yes | "Algorithm 1: Fair-UCRL" (a hedged sketch of this algorithmic style follows the table)
Open Source Code | No | The paper does not provide any statement about releasing open-source code or a link to a code repository.
Open Datasets | Yes | "We study the PASCAL recognizing textual entailment task as in Snow et al. (2008). We study the continuous positive airway pressure therapy (CPAP) as in Herlihy et al. (2023); Li and Varakantham (2022b). We study the land mobile satellite system problem as in Prieto-Cerdeira et al. (2010)."
Dataset Splits | No | The paper describes an online learning setting with episodes, in which the decision maker (DM) estimates transition kernels and reward functions by observing trajectories; it does not provide traditional train/validation/test splits for a static dataset.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | "The activation budget is set to 100. The minimum activation fraction η is set to be 0.1, 0.2, and 0.3 for the three classes of arms, respectively. We set K = H = 160. We use Monte Carlo simulations with 1,000 independent trials. The budget is B = 5 and the fairness constraint is set to be a random number between [0.1, 0.7]." (a configuration sketch follows the table)
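
Since the paper provides pseudocode (Algorithm 1: Fair-UCRL) but no released code, the following is a minimal Python sketch of the generic UCRL-style machinery such an algorithm relies on: maintaining empirical estimates of each arm's transition kernel from observed trajectories together with confidence radii, from which an optimistic model can be chosen each episode. All class, method, and parameter names here (`OptimisticArmModel`, `observe`, `confidence_radius`, `delta`) are our own illustration under standard UCRL assumptions, not the authors' implementation, and the confidence constant is not the paper's exact bound.

```python
import numpy as np

class OptimisticArmModel:
    """Per-arm UCRL-style model: empirical transition estimates plus
    L1 confidence radii. A hedged sketch of one ingredient of a
    Fair-UCRL-like algorithm; names and constants are illustrative."""

    def __init__(self, n_states: int, n_actions: int, delta: float = 0.05):
        self.n_states = n_states
        self.n_actions = n_actions
        self.delta = delta  # confidence parameter for the radii
        # Visit counts N(s, a) and transition counts N(s, a, s')
        self.visits = np.zeros((n_states, n_actions))
        self.transitions = np.zeros((n_states, n_actions, n_states))

    def observe(self, s: int, a: int, s_next: int) -> None:
        """Record one (state, action, next state) step from a trajectory."""
        self.visits[s, a] += 1
        self.transitions[s, a, s_next] += 1

    def empirical_kernel(self) -> np.ndarray:
        """Empirical transition probabilities; uniform where (s, a) is unvisited."""
        denom = np.maximum(self.visits, 1.0)[..., None]
        p_hat = self.transitions / denom
        p_hat[self.visits == 0] = 1.0 / self.n_states
        return p_hat

    def confidence_radius(self) -> np.ndarray:
        """L1 confidence radius per (s, a): a Weissman-style bound commonly
        used in UCRL analyses (assumed here; not the paper's exact constant)."""
        n = np.maximum(self.visits, 1.0)
        return np.sqrt(2.0 * self.n_states * np.log(2.0 / self.delta) / n)
```

An optimistic planner would then select, within these confidence balls, the transition kernel most favorable to the current policy while enforcing the per-arm minimum activation fractions; that fairness-aware planning step is the part specific to Fair-UCRL and is not reproduced here.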
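
The Experiment Setup row pins down most of the simulation knobs. Below is a hedged configuration sketch that collects those reported values in one place; the dataclass and its field names are ours, the reading of K and H as the number of episodes and per-episode horizon follows standard episodic RL usage, and the fairness draw implements "a random number between [0.1, 0.7]" as a uniform draw, which is our interpretation of the paper's wording.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class FairRMABConfig:
    """Simulation settings reported in the paper's experiment setup.
    Structure and field names are illustrative, not the authors'."""
    n_episodes: int = 160                   # K = 160
    horizon: int = 160                      # H = 160
    activation_budget: int = 100            # synthetic experiments
    budget_real: int = 5                    # B = 5 in the real-world experiments
    eta_by_class: tuple = (0.1, 0.2, 0.3)   # minimum activation fractions
    n_trials: int = 1000                    # independent Monte Carlo trials

def sample_fairness_constraint(rng: np.random.Generator) -> float:
    """Paper: 'the fairness constraint is set to be a random number
    between [0.1, 0.7]'; we assume this means a uniform draw."""
    return rng.uniform(0.1, 0.7)

if __name__ == "__main__":
    cfg = FairRMABConfig()
    rng = np.random.default_rng(seed=0)
    # One fairness constraint per trial, as we read the setup.
    etas = np.array([sample_fairness_constraint(rng) for _ in range(cfg.n_trials)])
    print(cfg)
    print("fairness constraints: mean=%.3f, min=%.3f, max=%.3f"
          % (etas.mean(), etas.min(), etas.max()))
```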