Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On the Complexity of Differentially Private Best-Arm Identification with Fixed Confidence
Authors: Achraf Azize, Marc Jourdan, Aymen Al Marjani, Debabrota Basu
NeurIPS 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we provide an experimental analysis of AdaP-TT that validates our theoretical results. |
| Researcher Affiliation | Academia | Achraf Azize, Équipe Scool, Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France (EMAIL); Marc Jourdan, Équipe Scool, Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France (EMAIL); Aymen Al Marjani, UMPA, ENS Lyon, Lyon, France (EMAIL); Debabrota Basu, Équipe Scool, Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France (EMAIL) |
| Pseudocode | Yes | Algorithm 1 Sequential interaction between a BAI strategy and users. ... Algorithm 2 AdaP-TT. Private statistics are in red. Changes due to privacy are in blue. |
| Open Source Code | No | The paper states 'We implement all the algorithms in Python (version 3.8)...' but does not include any statement about releasing the code for public access, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper defines the 'Bernoulli instances' (e.g., 'µ1 = (0.95, 0.9, 0.9, 0.9, 0.5)') by their parameters and cites a previous paper ([SS19]) that defines these instances. However, these are mathematical definitions of distributions for simulation, not references to a downloadable or formally cited 'dataset' in the conventional sense (e.g., a collection of data files). |
| Dataset Splits | No | The paper does not mention training, validation, or test dataset splits. The experimental setup involves simulating performance on predefined Bernoulli distributions rather than splitting a static dataset. |
| Hardware Specification | Yes | We implement all the algorithms in Python (version 3.8) and on an 8-core 64-bit Intel i5@1.6 GHz CPU. |
| Software Dependencies | Yes | We implement all the algorithms in Python (version 3.8)... |
| Experiment Setup | Yes | We set the risk δ = 10⁻². We implement all the algorithms in Python (version 3.8) and on an 8-core 64-bit Intel i5@1.6 GHz CPU. We run each algorithm 100 times. |
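The setup quoted above is fully simulation-based: arms are Bernoulli distributions defined by their means, the risk is δ = 10⁻², and each algorithm is run 100 times. A minimal sketch of such a run is shown below, using the µ1 instance and settings from the table; the uniform-sampling recommendation rule is a hypothetical placeholder, not the paper's AdaP-TT algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

mu1 = np.array([0.95, 0.9, 0.9, 0.9, 0.5])  # Bernoulli arm means (instance from [SS19])
delta = 1e-2   # risk: wrong recommendation allowed with probability <= delta
n_runs = 100   # the paper runs each algorithm 100 times

def recommend(means, budget=2000):
    """Placeholder BAI strategy: pull every arm uniformly from a fixed
    budget, then recommend the arm with the highest empirical mean."""
    k = len(means)
    pulls = budget // k
    # Draw Bernoulli rewards for each arm: shape (k, pulls).
    rewards = rng.random((k, pulls)) < means[:, None]
    return int(rewards.mean(axis=1).argmax())

# Repeat the simulation and count correct identifications of the best arm.
best_arm = int(mu1.argmax())
correct = sum(recommend(mu1) == best_arm for _ in range(n_runs))
print(f"correct identifications: {correct}/{n_runs}")
```

This also illustrates why the report marks "Open Datasets" and "Dataset Splits" as No: the data is regenerated from the distribution parameters on every run, so there is no static dataset to release or split.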