Achieving Fairness in the Stochastic Multi-Armed Bandit Problem
Authors: Vishakha Patil, Ganesh Ghalme, Vineet Nair, Y. Narahari
AAAI 2020, pp. 5379-5386 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conclude by experimentally validating our theoretical results. ... We conclude by providing detailed experimental results to validate our theoretical guarantees in Section 6. |
| Researcher Affiliation | Academia | Vishakha Patil, Ganesh Ghalme, Vineet Nair, Y. Narahari Indian Institute of Science, Bengaluru, India {patilv, ganeshg, vineet, narahari}@iisc.ac.in |
| Pseudocode | Yes | Algorithm 1: FAIR-LEARN (a hedged Python sketch of a FAIR-LEARN-style procedure follows this table) |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper describes a theoretical bandit problem setup and uses specific parameters for simulations (e.g., k, μ, r) rather than pre-existing, publicly available datasets for training. Thus, there is no 'dataset' in the traditional sense for which access information would be provided. |
| Dataset Splits | No | Any mention of training, validation, or testing concerns the bandit algorithm's internal learning process and regret analysis, not explicit dataset splits for empirical evaluation. No dataset splits are specified. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | We consider the following FAIR-MAB instance: k = 10, μ1 = 0.8, and μi = μ1 − Δi, where Δi = 0.01i, and r = (0.05, 0.05, . . . , 0.05) ∈ [0, 1]^k. We show the results for T = 10^6. ... We consider an instance with k = 3, μ = (0.7, 0.5, 0.4), r = (0.2, 0.3, 0.25), and α = 0. A hedged simulation sketch of the k = 3 instance follows this table. |
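
Since no open-source code is available (per the table above), here is a minimal Python sketch of a FAIR-LEARN-style meta-algorithm. The fairness rule (pull any arm whose count N_i(t) has fallen more than α below its quota r_i * t, otherwise defer to a base learner) is reconstructed from the paper's high-level description, and the choice of UCB1 as the base learner is our assumption, not necessarily the authors' exact Algorithm 1.

```python
import math
import random

def fair_learn_ucb(mu, r, alpha, T, seed=0):
    """Sketch of a FAIR-LEARN-style meta-algorithm with UCB1 as the base learner.

    mu    : true Bernoulli means of the k arms (used only to simulate rewards)
    r     : minimum pull-rate guarantees, with sum(r) <= 1
    alpha : fairness tolerance (alpha = 0 enforces the quotas as tightly as possible)
    T     : horizon (number of rounds)
    """
    rng = random.Random(seed)
    k = len(mu)
    counts = [0] * k      # N_i(t): number of pulls of arm i so far
    rewards = [0.0] * k   # cumulative observed reward of arm i
    for t in range(1, T + 1):
        # "Fairness debt" of each arm: how far its pull count lags its quota r_i * t.
        debt = [r[i] * t - counts[i] for i in range(k)]
        i_star = max(range(k), key=lambda i: debt[i])
        if debt[i_star] > alpha:
            arm = i_star                 # repay the most-violated quota first
        elif 0 in counts:
            arm = counts.index(0)        # UCB1 initialization: try each arm once
        else:
            # No quota violated beyond alpha: defer to the base learner (UCB1).
            arm = max(range(k), key=lambda i: rewards[i] / counts[i]
                                              + math.sqrt(2 * math.log(t) / counts[i]))
        counts[arm] += 1
        rewards[arm] += 1.0 if rng.random() < mu[arm] else 0.0
    return counts, rewards
```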
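
Under the same assumptions, the k = 3 instance quoted in the Experiment Setup row can be simulated as below. The horizon T = 10^6 is borrowed from the quoted k = 10 run and is our assumption for this instance.

```python
# The k = 3 instance quoted in the Experiment Setup row:
# mu = (0.7, 0.5, 0.4), r = (0.2, 0.3, 0.25), alpha = 0.
T = 10**6
counts, _ = fair_learn_ucb(mu=[0.7, 0.5, 0.4], r=[0.2, 0.3, 0.25], alpha=0.0, T=T)
print("pull fractions:", [c / T for c in counts])
# Each fraction should come out at or above its r_i; the slack (1 - sum(r))
# goes mostly to the empirically best arm under UCB1.
```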