Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
The Fundamental Limits of Neural Networks for Interval Certified Robustness
Authors: Matthew B. Mirman, Maximilian Baader, Martin Vechev
TMLR 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper we present a fundamental result on the limitation of neural networks for interval analyzable robust classification. Our main theorem shows that non-invertible functions cannot be built such that interval analysis is precise everywhere. Given this, we derive a paradox: while every dataset can be robustly classified, there are simple datasets that cannot be provably robustly classified with interval analysis. Main contributions. In this paper, we present the first proofs capturing key limitations (incompleteness) of using ReLU-based neural networks to build robust classifiers that can be certified with interval analysis: Fundamental Imprecision of Interval Analysis (Theorem 4.10) and Impossibility of Interval-Provably Robust Classifiers (Section 5). |
| Researcher Affiliation | Academia | Matthew Mirman (EMAIL), Maximilian Baader (EMAIL), Martin Vechev (EMAIL); Department of Computer Science, ETH Zurich |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. The methodological explanations are provided through formal definitions, lemmas, theorems, and mathematical proofs. |
| Open Source Code | No | The paper does not provide any concrete access information to source code, such as a repository link, an explicit code release statement, or mention of code in supplementary materials. |
| Open Datasets | No | The paper mentions CIFAR10 in the introduction to provide context regarding the state-of-the-art certified robust accuracy, but it does not use this dataset for experiments or provide access information. The theoretical proofs use abstract examples like 'simple datasets' or 'k flips' that do not require public access information. |
| Dataset Splits | No | The paper focuses on theoretical proofs and does not conduct empirical experiments using datasets; therefore, no dataset split information (training/test/validation) is provided. |
| Hardware Specification | No | The paper is theoretical in nature and does not describe any experimental setup that would require hardware specifications. Therefore, no specific hardware details are mentioned. |
| Software Dependencies | No | The paper is purely theoretical and does not describe any implementation details or experiments that would require specific software dependencies with version numbers. |
| Experiment Setup | No | The paper focuses on theoretical analysis and proofs, and does not include any experimental setup details such as hyperparameters, training configurations, or system-level settings. |
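The "Research Type" row above summarizes the paper's central claim: interval analysis of ReLU networks is inherently imprecise for non-invertible functions. The sketch below is not from the paper; it is a minimal, self-contained illustration of the kind of imprecision Theorem 4.10 formalizes, using IBP-style interval propagation through the function f(x) = ReLU(x) + ReLU(-x) = |x|.

```python
# Minimal sketch (assumed example, not code from the paper): interval
# propagation through a tiny ReLU computation, showing that interval
# analysis over-approximates because it drops dependencies between branches.

def relu_interval(lo, hi):
    """Exact interval transformer for ReLU: ReLU is monotone,
    so applying it endpoint-wise is sound and tight for a single use."""
    return max(0.0, lo), max(0.0, hi)

def abs_via_relu_interval(lo, hi):
    """Interval propagation through f(x) = ReLU(x) + ReLU(-x) = |x|."""
    a = relu_interval(lo, hi)        # interval for ReLU(x)
    b = relu_interval(-hi, -lo)      # interval for ReLU(-x)
    return a[0] + b[0], a[1] + b[1]  # interval addition of the two branches

# On [-1, 1] the true range of |x| is [0, 1], but interval analysis
# treats the two ReLU branches as independent and returns [0, 2].
print(abs_via_relu_interval(-1.0, 1.0))
```

Each individual step (ReLU, negation, addition) is handled exactly, yet the composed result is strictly looser than the true range, which is why certification via interval analysis can fail even for robustly classifiable inputs.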