On the Role of Randomization in Adversarially Robust Classification
Authors: Lucas Gnecco Heredia, Muni Sreenivas Pydi, Laurent Meunier, Benjamin Negrevergne, Yann Chevaleyre
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we clarify the role of randomization in building adversarially robust classifiers. Given a base hypothesis set of deterministic classifiers, we show the conditions under which a randomized ensemble outperforms the hypothesis set in adversarial risk, extending previous results. Additionally, we show that for any probabilistic binary classifier (including randomized ensembles), there exists a deterministic classifier that outperforms it. |
| Researcher Affiliation | Collaboration | Lucas Gnecco Heredia¹, Muni Sreenivas Pydi¹, Laurent Meunier², Benjamin Negrevergne¹, Yann Chevaleyre¹ (¹ CNRS, LAMSADE, Université Paris Dauphine PSL; ² Payflows) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | No | The paper is theoretical and relies on conceptual toy examples (e.g., 'Consider a discrete data distribution', 'Binary classification example with discrete data distribution') to illustrate its results, rather than public datasets used to train models in experiments. No public dataset access information is provided. |
| Dataset Splits | No | The paper is theoretical and does not describe empirical experiments that would involve training, validation, or test dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any hardware used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameters, training configurations, or system-level settings. |
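
The two theoretical claims summarized in the "Research Type" row can be made concrete with a small numerical sketch. The Python snippet below is not from the paper: the discrete distribution, perturbation sets, and classifiers (`a`, `b`, `u`, `v`, `h1`, `h2`, `h_star`) are all invented for illustration. It computes the adversarial 0-1 risk of two deterministic base classifiers, of their uniform mixture (a randomized ensemble), and of a deterministic classifier outside the base set.

```python
# Toy illustration (not from the paper): adversarial 0-1 risk of deterministic
# classifiers vs. a uniform mixture on a discrete distribution.
# All points, labels, perturbation sets and classifiers are invented for illustration.

# Data: two clean points with equal probability, as (point, label, probability).
data = [("a", 0, 0.5), ("b", 1, 0.5)]

# Perturbation sets: the adversary may move each clean point to any listed point.
perturb = {"a": ["a", "u", "v"], "b": ["b", "u", "v"]}

# Two deterministic base classifiers (maps from points to predicted labels).
h1 = {"a": 0, "b": 1, "u": 1, "v": 0}
h2 = {"a": 0, "b": 1, "u": 0, "v": 1}

def adv_risk_deterministic(h):
    """Adversarial 0-1 risk: worst-case perturbation for each clean point."""
    return sum(p * max(h[xp] != y for xp in perturb[x]) for x, y, p in data)

def adv_risk_mixture(classifiers, weights):
    """Adversarial risk of a randomized ensemble: the adversary picks the
    perturbation maximizing the *expected* error over the mixture."""
    return sum(
        p * max(sum(w * (h[xp] != y) for h, w in zip(classifiers, weights))
                for xp in perturb[x])
        for x, y, p in data)

print(adv_risk_deterministic(h1))              # 1.0
print(adv_risk_deterministic(h2))              # 1.0
print(adv_risk_mixture([h1, h2], [0.5, 0.5]))  # 0.5: the mixture beats both base classifiers

# A deterministic classifier outside the base set {h1, h2} matches the mixture.
h_star = {"a": 0, "b": 1, "u": 0, "v": 0}
print(adv_risk_deterministic(h_star))          # 0.5
```

Under these invented numbers, the uniform mixture halves the adversarial risk of either base classifier, yet it does not beat the best deterministic classifier over all of {a, b, u, v}, in line with the two claims quoted above.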