Fiduciary Bandits
Authors: Gal Bahar, Omer Ben-Porat, Kevin Leyton-Brown, Moshe Tennenholtz
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our main contribution is a positive result: an asymptotically optimal, incentive-compatible, and ex-ante individually rational recommendation algorithm. The main technical contribution of this paper is an EAIR mechanism that obtains the highest social welfare achievable by any EAIR mechanism, up to an additive factor of o(1/n). Our analysis uses an intrinsic property of the setting, which is further elaborated in Theorem 1. |
| Researcher Affiliation | Academia | (1) Technion - Israel Institute of Technology, Israel; (2) University of British Columbia, Canada. |
| Pseudocode | Yes | Algorithm 1 Fiduciary Explore & Exploit (FEE) and Algorithm 2 IC Fiduciary Explore & Exploit (IC-FEE) |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | No | The paper describes a theoretical model and algorithms (FEE, IC-FEE) but does not involve empirical training or evaluation on a dataset. Therefore, no information about publicly available datasets is provided. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with data. Therefore, no training/validation/test dataset splits are provided. |
| Hardware Specification | No | The paper is theoretical and does not describe empirical experiments. Therefore, no specific hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and presents algorithms without implementation details. Therefore, no specific software dependencies with version numbers are provided. |
| Experiment Setup | No | The paper is theoretical and describes algorithms with mathematical parameters, but it does not provide concrete experimental setup details such as hyperparameter values, training configurations, or system-level settings for empirical runs. |
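Since the paper provides only pseudocode (Algorithms 1 and 2) and no implementation, the explore-then-exploit idea behind a fiduciary recommender can be illustrated with a toy sketch. This is *not* the paper's FEE or IC-FEE algorithm: the bandit instance, the noise model, and the `ex_ante_ok` tolerance check are all hypothetical stand-ins for the paper's formal ex-ante individual-rationality (EAIR) guarantee.

```python
import random

def explore_then_exploit_sketch(mu_prior, true_means, n_agents, seed=0):
    """Toy explore-then-exploit recommender with a crude ex-ante
    individual-rationality (IR) style check. Illustrative only;
    not the paper's FEE/IC-FEE mechanism."""
    rng = random.Random(seed)
    # The "default" arm is the a-priori best one, which a myopic agent
    # would pick without any recommendation.
    default = max(range(len(mu_prior)), key=lambda a: mu_prior[a])
    estimates = list(mu_prior)
    counts = [0] * len(mu_prior)
    rewards = []
    for t in range(n_agents):
        if t < len(mu_prior):
            arm = t  # exploration phase: recommend each arm once
        else:
            # exploitation phase: recommend the empirically best arm
            arm = max(range(len(estimates)), key=lambda a: estimates[a])
        r = true_means[arm] + rng.gauss(0, 0.01)  # noisy reward draw
        counts[arm] += 1
        estimates[arm] += (r - estimates[arm]) / counts[arm]
        rewards.append(r)
    # Crude stand-in for the EAIR property: on average, following the
    # recommendations should not do worse than always taking the default.
    ex_ante_ok = sum(rewards) / len(rewards) >= true_means[default] - 0.05
    return estimates, ex_ante_ok
```

Running the sketch on a two-arm instance where the a-priori inferior arm is actually better (`mu_prior=[0.5, 0.4]`, `true_means=[0.5, 0.7]`) shows the point of exploration: after each arm is sampled once, the recommender switches to the truly better arm, and the population's average reward still clears the default-arm baseline.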