Universal Approximation with Certified Networks

Authors: Maximilian Baader, Matthew Mirman, Martin Vechev

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We prove that for every continuous function f, there exists a network n such that: (i) n approximates f arbitrarily close, and (ii) simple interval bound propagation of a region B through n yields a result that is arbitrarily close to the optimal output of f on B. Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks. ... we aim to answer an existential question and thus we focus on proving that a given network exists."
Researcher Affiliation | Academia | Maximilian Baader, Matthew Mirman, Martin Vechev; Department of Computer Science, ETH Zurich, Switzerland
Pseudocode | No | The paper provides mathematical definitions and proofs but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper is theoretical and focuses on proving the existence of networks; it does not mention releasing any source code for its methodology.
Open Datasets | No | The paper refers to datasets such as MNIST and CIFAR10 when discussing certified robustness, but it conducts no experiments on them and provides no access information for any dataset used in its own research.
Dataset Splits | No | The paper is theoretical and does not describe any experimental setup involving validation splits.
Hardware Specification | No | The paper is theoretical and describes no experimental setup, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and describes no experimental setup, so no software dependencies with version numbers are listed.
Experiment Setup | No | The paper is theoretical and describes no experimental setup, so no specific hyperparameter values or training configurations are mentioned.
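For readers unfamiliar with the "simple interval bound propagation" named in the theorem, the sketch below illustrates the generic technique: an input box is pushed through a ReLU network layer by layer, splitting each weight matrix into its positive and negative parts. This is an illustrative assumption on our part, not code from the paper (which contains no pseudocode); the network, weights, and function name are hypothetical.

```python
import numpy as np

def ibp_relu_network(weights, biases, lower, upper):
    """Propagate the input box [lower, upper] through a feed-forward
    ReLU network, returning sound elementwise output bounds.
    (Hypothetical helper for illustration; not from the paper.)"""
    n_layers = len(weights)
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Positive weights map lower to lower; negative weights swap bounds.
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        lower, upper = new_lower, new_upper
        if i < n_layers - 1:
            # ReLU is monotone, so clamping the bounds is exact per neuron.
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

# Toy 2-layer network evaluated on the box [-1, 1] x [-1, 1].
weights = [np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([[1.0, 1.0]])]
biases = [np.zeros(2), np.zeros(1)]
lo, hi = ibp_relu_network(weights, biases,
                          np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```

On this toy network the true maximum over the box is 2, while IBP returns an upper bound of 3: the bound is sound but loose. That gap is what makes the paper's existence result nontrivial — it proves networks exist that approximate f well *and* whose interval bounds are arbitrarily close to optimal.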