Query Complexity of Adversarial Attacks

Authors: Grzegorz Głuch, Rüdiger Urbanke

ICML 2021

Reproducibility Variable | Result | LLM Response

Research Type | Theoretical | "We give a lower bound on the number of queries in terms of the entropy of decision boundaries of the classifier. Using this result we analyze two classical learning algorithms on two synthetic tasks for which we prove meaningful security guarantees."

Researcher Affiliation | Academia | Grzegorz Głuch, Rüdiger Urbanke; School of Computer and Communication Sciences, EPFL, Switzerland. Correspondence to: Grzegorz Głuch <grzegorz.gluch@epfl.ch>.

Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.

Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology.

Open Datasets | No | The paper describes synthetic tasks and references the "well-known adversarial spheres distribution, introduced in the seminal paper Gilmer et al. (2018)", but does not provide concrete access information (link, DOI, or repository) for these datasets within this paper.

Dataset Splits | No | The paper describes theoretical concepts and synthetic data generation but does not provide specific details about training, validation, or test dataset splits.

Hardware Specification | No | The paper does not mention any specific hardware (e.g., GPU or CPU models, or cloud computing specifications) used for running experiments, as its focus is theoretical.

Software Dependencies | No | The paper does not specify any software dependencies with version numbers.

Experiment Setup | No | The paper is theoretical and does not describe concrete experimental setup details, such as hyperparameter values or system-level training settings.