Factored-Reward Bandits with Intermediate Observations
Authors: Marco Mussi, Simone Drago, Marcello Restelli, Alberto Maria Metelli
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical simulations are provided in Appendix E. |
| Researcher Affiliation | Academia | 1Politecnico di Milano, Milan, Italy. |
| Pseudocode | Yes | Algorithm 1: F-UCB (a minimal selection-rule sketch follows the table). |
| Open Source Code | Yes | The code of the experiments can be found at https://github.com/marcomussi/FRB. |
| Open Datasets | No | The paper generates synthetic data for its experiments, describing the process: 'We draw the expected values µ_{i,j} for i ∈ ⟦d⟧ and j ∈ ⟦k⟧ from a uniform distribution in the range [0.7, 1].' It does not use a pre-existing public dataset or provide specific access information for the generated data. |
| Dataset Splits | No | The paper describes experimental settings but does not specify explicit training, validation, or test dataset splits (e.g., percentages or sample counts). It refers to the 'learning horizon T' but not data partitioning. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or cloud computing resources used for running the experiments. |
| Software Dependencies | No | The paper describes algorithms and numerical simulations but does not list any specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | Setting: For the sake of simplicity in the presentation of the results, we consider the scenario in which all the problem dimensions present the same number of actions (i.e., k₁ = ⋯ = k_d =: k). Moreover, we consider the setting in which the intermediate observations are drawn from Gaussian distributions with mean µ_{i,a_i(t)} for every action component a_i(t) in position i of the action vector a, formally x_i(t) ∼ N(µ_{i,a_i(t)}, σ²), ∀i ∈ ⟦d⟧. We consider values of k ∈ ⟦3, 5⟧ and values of d ∈ ⟦4⟧. We draw the expected values µ_{i,j} for i ∈ ⟦d⟧ and j ∈ ⟦k⟧ from a uniform distribution in the range [0.7, 1]. We fix a value of σ = 0.1. ... We evaluate the performances in terms of cumulative regret with T = 10⁴, averaged over 50 trials. A simulation sketch of this setting follows the table. |
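The paper's pseudocode (Algorithm 1, F-UCB) maintains an optimistic index for every (dimension, action) pair and builds the action vector by maximizing each dimension independently. Below is a minimal Python sketch in that spirit; the class name `FactoredUCB` and the UCB1-style bonus σ√(2 log t / n) are illustrative assumptions, not the paper's exact confidence bound.

```python
import numpy as np

class FactoredUCB:
    """Minimal factored-UCB sketch: one optimistic index per
    (dimension, action) pair, with per-dimension argmax selection.

    The bonus sigma * sqrt(2 * log(t) / n) is a standard UCB1-style
    choice, NOT necessarily the exact bound used by F-UCB in the paper.
    """

    def __init__(self, d, k, sigma):
        self.d, self.k, self.sigma = d, k, sigma
        self.counts = np.zeros((d, k))   # pulls per (dimension, action)
        self.means = np.zeros((d, k))    # empirical means of the x_i
        self.t = 0

    def select(self):
        self.t += 1
        if self.t <= self.k:             # init: play action j in every dimension once
            return np.full(self.d, self.t - 1)
        bonus = self.sigma * np.sqrt(2.0 * np.log(self.t) / self.counts)
        return np.argmax(self.means + bonus, axis=1)  # argmax per dimension

    def update(self, action, observations):
        for i in range(self.d):
            a, x = action[i], observations[i]
            self.counts[i, a] += 1
            self.means[i, a] += (x - self.means[i, a]) / self.counts[i, a]
```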
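The setting quoted in the last row can likewise be simulated directly. The sketch below reuses the hypothetical `FactoredUCB` class above; `run_trial` and its defaults are illustrative, with d = 4, k = 5, σ = 0.1, T = 10⁴, and 50 trials matching the quoted setup, and the reward is assumed to be the product of the d intermediate observations.

```python
import numpy as np

def run_trial(d=4, k=5, sigma=0.1, T=10_000, seed=0):
    """Simulate one trial of the synthetic setting described above:
    expected values drawn from U[0.7, 1], Gaussian intermediate
    observations, reward assumed to be the product of the d observations."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(0.7, 1.0, size=(d, k))
    best = np.prod(mu.max(axis=1))       # expected reward of the optimal vector
    agent = FactoredUCB(d, k, sigma)
    regret = np.empty(T)
    for t in range(T):
        a = agent.select()
        x = rng.normal(mu[np.arange(d), a], sigma)       # intermediate observations
        agent.update(a, x)
        regret[t] = best - np.prod(mu[np.arange(d), a])  # instantaneous pseudo-regret
    return regret.cumsum()

# Average cumulative regret over 50 independent trials, as in the paper's evaluation.
curves = np.stack([run_trial(seed=s) for s in range(50)])
avg_regret = curves.mean(axis=0)
```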