Fixed-Budget Differentially Private Best Arm Identification
Authors: Zhirui Chen, P. N. Karthik, Yeow Meng Chee, Vincent Tan
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This section presents a numerical evaluation of our proposed DP-BAI policy on synthetic data, and compares it with BASELINE... We run experiments with several choices for the budget T and the privacy parameter ε, conducting 1000 independent trials for each pair of (T, ε) and reporting the fraction of trials in which the best arm is successfully identified. The experimental results are shown in Figure 1... |
| Researcher Affiliation | Academia | (1) National University of Singapore; (2) Indian Institute of Technology, Hyderabad |
| Pseudocode | Yes | For pseudo-code of the DP-BAI policy, see Algorithm 1. (Algorithm 1: Fixed-Budget Differentially Private Best Arm Identification (DP-BAI); Algorithm 2: DP-BAI-GAUSS) |
| Open Source Code | No | The paper does not provide an explicit statement about open-source code, or a link to a repository, for the described methodology. |
| Open Datasets | No | Our synthetic instance is constructed as follows. We set K = 30, d = 2, and θ = [0.045, 0.5]⊤, a1 = [0, 1]⊤, a2 = [0, 0.9]⊤, a3 = [10, 0]⊤, and ai = [1, ωi]⊤ for all i ∈ {4, . . . , 30}, where ωi is randomly generated from a uniform distribution on the interval [0, 0.8]... In addition, we set νi, the reward distribution of arm i, to be the uniform distribution supported on [0, 2µi] for all i ∈ [K]. |
| Dataset Splits | No | The paper uses synthetic data and independent simulation trials rather than predefined training/validation/test splits from a fixed dataset, so it does not provide dataset split information in the traditional sense. |
| Hardware Specification | No | The paper does not mention any specific hardware (e.g., GPU/CPU models, processors, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., libraries, frameworks, solvers). |
| Experiment Setup | Yes | Our synthetic instance is constructed as follows. We set K = 30, d = 2, and θ = [0.045, 0.5]⊤, a1 = [0, 1]⊤, a2 = [0, 0.9]⊤, a3 = [10, 0]⊤, and ai = [1, ωi]⊤ for all i ∈ {4, . . . , 30}, where ωi is randomly generated from a uniform distribution on the interval [0, 0.8]... In addition, we set νi, the reward distribution of arm i, to be the uniform distribution supported on [0, 2µi] for all i ∈ [K]. We run experiments with several choices for the budget T and the privacy parameter ε, conducting 1000 independent trials for each pair of (T, ε)... (A code sketch of this setup follows the table.) |
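As a rough illustration of the synthetic instance quoted above, the following Python sketch constructs the arm feature vectors, mean rewards, and reward distributions. It is a minimal sketch under stated assumptions: the random seed and the column-vector (transpose) layout are not specified in the excerpt, and the paper itself does not release code.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is an assumption; the excerpt does not specify one

K, d = 30, 2
theta = np.array([0.045, 0.5])

# Arm feature vectors as described in the quoted setup.
arms = np.zeros((K, d))
arms[0] = [0.0, 1.0]    # a_1
arms[1] = [0.0, 0.9]    # a_2
arms[2] = [10.0, 0.0]   # a_3
omega = rng.uniform(0.0, 0.8, size=K - 3)                 # omega_i ~ Uniform[0, 0.8]
arms[3:] = np.column_stack([np.ones(K - 3), omega])       # a_i = [1, omega_i] for i in {4,...,30}

# Mean rewards mu_i = <a_i, theta>; arm i's reward distribution is Uniform[0, 2*mu_i].
mu = arms @ theta
best_arm = int(np.argmax(mu))  # arm 1 (index 0) under this construction

def pull(arm: int) -> float:
    """Draw one reward from the uniform distribution supported on [0, 2*mu_arm]."""
    return rng.uniform(0.0, 2.0 * mu[arm])
```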
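The evaluation protocol in the excerpt (1000 independent trials per (T, ε) pair, reporting the fraction of trials that identify the best arm) could then be sketched as below, reusing `best_arm` from the previous snippet. Here `run_policy` is a hypothetical placeholder for the paper's DP-BAI policy (its internals are given in Algorithm 1 and are not reproduced here), and the commented budgets and ε values are illustrative, not taken from the excerpt.

```python
def run_policy(T: int, epsilon: float) -> int:
    """Hypothetical placeholder: run a fixed-budget, epsilon-DP best-arm-identification
    policy for T arm pulls and return the index of the recommended arm. The actual
    DP-BAI policy is specified in Algorithm 1 of the paper."""
    raise NotImplementedError

def success_fraction(T: int, epsilon: float, trials: int = 1000) -> float:
    """Fraction of independent trials in which the recommended arm equals the best arm."""
    hits = sum(run_policy(T, epsilon) == best_arm for _ in range(trials))
    return hits / trials

# Example sweep over budgets and privacy parameters (values are illustrative only):
# for T in (2000, 4000, 8000):
#     for eps in (0.1, 1.0, 10.0):
#         print(T, eps, success_fraction(T, eps))
```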