Bandit Social Learning under Myopic Behavior
Authors: Kiarash Banihashem, MohammadTaghi Hajiaghayi, Suho Shin, Aleksandrs Slivkins
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We study social learning dynamics motivated by reviews on online platforms. The agents collectively follow a simple multi-armed bandit protocol, but each agent acts myopically, without regard to exploration. We allow a wide range of myopic behaviors that are consistent with (parameterized) confidence intervals for the arms' expected rewards. We derive stark exploration failures for any such behavior, and provide matching positive results. As a special case, we obtain the first general results on the failure of the greedy algorithm in bandits, thus providing a theoretical foundation for why bandit algorithms should explore. |
| Researcher Affiliation | Collaboration | Kiarash Banihashem, University of Maryland, College Park (kiarash@umd.edu); Mohammad Taghi Hajiaghayi, University of Maryland, College Park (hajiagha@umd.edu); Suho Shin, University of Maryland, College Park (suhoshin@umd.edu); Aleksandrs Slivkins, Microsoft Research NYC (slivkins@microsoft.com) |
| Pseudocode | Yes | Protocol 1: Bandit Social Learning. Problem instance: two arms a ∈ [2] with (fixed, but unknown) mean rewards µ1, µ2 ∈ [0, 1]; Initialization: hist ← { N0 samples of each arm }; for each round t = 1, 2, ..., T do: agent t arrives, observes hist and chooses an arm a_t ∈ [2]; reward r_t ∈ [0, 1] is drawn from a Bernoulli distribution with mean µ_{a_t}; the new datapoint (a_t, r_t) is added to hist. (A simulation sketch is given after the table.) |
| Open Source Code | No | The paper is theoretical and does not present a new method or algorithm that would typically involve releasing source code. No statement or link regarding code availability is provided. |
| Open Datasets | No | The paper is theoretical and defines a model with "initial knowledge (a dataset with some samples)" as part of its theoretical framework, but it does not use a specific, publicly available dataset for empirical training or evaluation. |
| Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments that would require dataset splits for training, validation, and testing. |
| Hardware Specification | No | The paper is theoretical and does not describe any hardware used for running experiments. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers for experimental reproducibility. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training configurations. |
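
For reference, below is a minimal Python sketch of Protocol 1 as summarized in the Pseudocode row above. The function names (`bandit_social_learning`, `greedy`) and the specific greedy agent rule are illustrative assumptions, not code from the paper; the greedy rule simply stands in for one possible myopic behavior.

```python
import random

def bandit_social_learning(mu, T, N0, choose_arm):
    """Sketch of Protocol 1 (Bandit Social Learning).

    mu         -- list of the two (unknown to the agents) mean rewards
    T          -- number of rounds / arriving agents
    N0         -- number of warm-start samples per arm
    choose_arm -- the myopic rule each arriving agent applies to the history
    """
    # Initialization: hist <- { N0 samples of each arm }
    hist = [(a, int(random.random() < mu[a])) for a in range(2) for _ in range(N0)]
    for t in range(T):
        # Agent t arrives, observes hist, and chooses an arm.
        a_t = choose_arm(hist)
        # Reward is drawn from a Bernoulli distribution with mean mu[a_t].
        r_t = int(random.random() < mu[a_t])
        # The new datapoint (a_t, r_t) is added to the public history.
        hist.append((a_t, r_t))
    return hist

def greedy(hist):
    """Hypothetical myopic rule: pick the arm with the higher empirical mean."""
    means = []
    for a in range(2):
        rewards = [r for (arm, r) in hist if arm == a]
        means.append(sum(rewards) / len(rewards) if rewards else 0.0)
    return max(range(2), key=lambda a: means[a])

# Example run: two arms with means 0.6 and 0.5, one warm-start sample per arm.
history = bandit_social_learning(mu=[0.6, 0.5], T=1000, N0=1, choose_arm=greedy)
```

With a greedy rule like this, the simulation illustrates the exploration-failure phenomenon the paper analyzes: an unlucky warm start can lock all subsequent agents onto the inferior arm.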