Learning from Distributed Users in Contextual Linear Bandits Without Sharing the Context
Authors: Osama Hanna, Lin Yang, Christina Fragouli
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]. Also, from the abstract: "achieving nearly the same regret bound as if the contexts were directly observable. The former bound improves upon existing bounds by a log(T) factor, while the latter achieves information theoretical tightness." |
| Researcher Affiliation | Academia | Osama A. Hanna University of California, Los Angeles ohanna@ucla.edu Lin F. Yang University of California, Los Angeles linyang@ucla.edu Christina Fragouli University of California, Los Angeles christina.fragouli@ucla.edu |
| Pseudocode | Yes | "Algorithm 1 Communication efficient for contextual linear bandits with known distribution (...)"; "Algorithm 2 Communication efficient for contextual linear bandits with unknown distribution" (a generic sketch of the underlying bandit setting follows the table). |
| Open Source Code | No | 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]. The paper does not provide any links or explicit statements about open-source code for the described methodology. |
| Open Datasets | No | 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]. The paper is theoretical and describes contexts being "generated from a distribution" but does not specify or provide access information for any publicly available or open dataset used for training. |
| Dataset Splits | No | The paper explicitly states "N/A" for running experiments in its checklist and does not discuss any training, validation, or test dataset splits. |
| Hardware Specification | No | The paper explicitly states "N/A" for running experiments in its checklist and does not mention any specific hardware used for computations or experiments. |
| Software Dependencies | No | The paper explicitly states "N/A" for running experiments in its checklist and does not list any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper explicitly states "N/A" for running experiments in its checklist and does not describe any experimental setup details such as hyperparameters or training configurations. |
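
Since the paper supplies pseudocode but no runnable code, the following is a minimal sketch of the standard contextual linear bandit loop (a generic LinUCB-style baseline) that the quoted Algorithms 1 and 2 build on. All constants (`d`, `K`, `T`, `lam`, `alpha`, the noise level) are assumed toy values, and the baseline observes contexts directly; it is not the authors' communication-efficient protocol, which keeps the raw context on the user side.

```python
import numpy as np

# Generic LinUCB-style loop for a contextual linear bandit (illustrative only;
# NOT the paper's communication-efficient method, which avoids sharing contexts).

rng = np.random.default_rng(0)
d, K, T = 5, 10, 2000                  # feature dimension, arms per round, horizon (assumed)
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

lam, alpha = 1.0, 1.0                  # ridge regularizer and exploration width (assumed)
V = lam * np.eye(d)                    # regularized design matrix
b = np.zeros(d)                        # running sum of reward-weighted features
regret = 0.0

for t in range(T):
    X = rng.normal(size=(K, d))        # contexts "generated from a distribution"
    theta_hat = np.linalg.solve(V, b)  # ridge estimate of theta_star
    V_inv = np.linalg.inv(V)
    bonus = np.sqrt(np.sum((X @ V_inv) * X, axis=1))   # per-arm confidence width
    a = int(np.argmax(X @ theta_hat + alpha * bonus))  # optimistic action choice
    r = X[a] @ theta_star + 0.1 * rng.normal()         # noisy linear reward
    V += np.outer(X[a], X[a])
    b += r * X[a]
    regret += np.max(X @ theta_star) - X[a] @ theta_star

print(f"cumulative regret after {T} rounds: {regret:.2f}")
```

The sketch is only meant to make concrete what "contexts generated from a distribution" and the regret quantity in the abstract quote refer to; the paper's contribution is achieving nearly the same regret without the learner ever observing the contexts.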