Fundamental limits on the robustness of image classifiers

Authors: Zheng Dai, David Gifford

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove that image classifiers are fundamentally sensitive to small perturbations in their inputs. Specifically, we show that, given some image space of n-by-n images, all but a tiny fraction of images in any image class induced over that space can be moved outside that class by adding some perturbation whose p-norm is O(n^(1/max(p,1))), as long as that image class takes up at most half of the image space. We then show that O(n^(1/max(p,1))) is asymptotically optimal. (A brief sketch of this norm scaling follows the table.)
Researcher Affiliation | Academia | Zheng Dai & David K. Gifford, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA ({zhengdai,gifford}@mit.edu)
Pseudocode | Yes | Algorithm 1: Robust Classifier. Input: an image I ∈ I_{n,q,h}^(b). Result: a label belonging to {0, 1}. S ← 0; for x ← 1 to qn do: for y ← 1 to n do: for a ← 1 to h do: S ← S + I_{x,y,a}; if S < n²qh/2 then return 0, else return 1. (A runnable sketch of this classifier appears after the table.)
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for their described methodology is openly available.
Open Datasets | Yes | In detail, we took all the images within the training set of Imagenette, a subset of Imagenet consisting of 10 classes (tench, English springer, cassette player, chain saw, church, French horn, garbage truck, gas pump, golf ball, parachute) (Howard). (A hypothetical sketch of an average-distance computation over this set appears after the table.)
Dataset Splits | No | The paper mentions using 'all the images within the training set of Imagenette' for calculating average distances, but it does not specify any training, validation, or test dataset splits for model training or evaluation, as the paper is theoretical.
Hardware Specification | No | The paper does not provide specific hardware details (such as CPU/GPU models, memory, or accelerator types) used for any computations or analyses.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or specific library versions).
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup involving hyperparameters, training configurations, or system-level training settings.
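
The following is a small numerical illustration, not taken from the paper, of how the perturbation bound quoted in the Research Type row scales: a perturbation that changes roughly n of the n² pixels by a bounded amount has ℓ_p-norm on the order of n^(1/p) for p ≥ 1, which matches the O(n^(1/max(p,1))) rate. The image size and perturbation pattern below are arbitrary choices for this sketch.

```python
import numpy as np

# Illustrative only: perturb one row of an n-by-n image (n pixels changed
# by magnitude 1) and compare its l_p-norm to the rate n**(1/max(p, 1)).
n = 64                                   # image side length (assumed for the sketch)
delta = np.zeros((n, n))
delta[0, :] = 1.0                        # n entries of magnitude 1

for p in (1, 2, 4, np.inf):
    norm = np.linalg.norm(delta.ravel(), ord=p)
    rate = n ** (1 / max(p, 1))          # equals 1 when p = inf
    print(f"p={p}: ||delta||_p = {norm:.2f}, n^(1/max(p,1)) = {rate:.2f}")
```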
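Below is a minimal Python sketch of the quoted Algorithm 1. The excerpt does not spell out the encoding of the image space I_{n,q,h}^(b), so the array shape and the assumption that entries are {0, 1}-valued (summing to at most n²qh) are interpretations for illustration, not the paper's implementation.

```python
import numpy as np

def robust_classifier(image: np.ndarray, n: int, q: int, h: int) -> int:
    """Sketch of the quoted Algorithm 1 (Robust Classifier).

    Assumes `image` is a non-negative array whose entries sum to at most
    n*n*q*h, e.g. a {0,1}-valued encoding of shape (q*n, n, h). The exact
    encoding of I_{n,q,h}^(b) is an assumption made for this sketch.
    """
    s = 0
    for value in image.ravel():          # the triple loop of Algorithm 1, flattened
        s += value
    return 0 if s < (n * n * q * h) / 2 else 1

# Example usage with an arbitrary binary image (illustrative only).
n, q, h = 4, 8, 3
example = np.random.default_rng(0).integers(0, 2, size=(q * n, n, h))
print(robust_classifier(example, n, q, h))
```

Thresholding the total pixel mass at half its maximum presumably means the label flips only when a perturbation shifts the sum by a large amount, which is the robustness property the pseudocode appears intended to illustrate.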
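For the Open Datasets row, the review notes that the Imagenette training set was used to calculate average distances. A hypothetical sketch of such a computation is given below; the directory layout, resize resolution, sample size, and choice of norms are assumptions for illustration and do not reproduce the paper's exact procedure.

```python
from pathlib import Path

import numpy as np
from PIL import Image

# Hypothetical sketch: average pairwise l_p distances over a small sample
# of Imagenette training images. Path, resolution, and sample size are
# assumptions for this sketch.
IMAGENETTE_TRAIN = Path("imagenette2/train")   # assumed local layout
SIDE = 160                                      # assumed resize resolution

paths = sorted(IMAGENETTE_TRAIN.rglob("*.JPEG"))[:200]   # small sample
images = []
for path in paths:
    img = Image.open(path).convert("RGB").resize((SIDE, SIDE))
    images.append(np.asarray(img, dtype=np.float64).ravel())
images = np.stack(images)

for p in (1, 2, np.inf):
    dists = [
        np.linalg.norm(images[i] - images[j], ord=p)
        for i in range(len(images))
        for j in range(i + 1, len(images))
    ]
    print(f"average l_{p} distance over sample: {np.mean(dists):.2f}")
```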