Exactly Computing the Local Lipschitz Constant of ReLU Networks

Authors: Matt Jordan, Alexandros G. Dimakis

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate our algorithm on various applications. We evaluate a variety of Lipschitz estimation techniques, measuring their relative error against the true Lipschitz constant. We apply our algorithm to yield reliable empirical insights about how changes in architecture and various regularization schemes affect the Lipschitz constants of ReLU networks.
Researcher Affiliation | Academia | Matt Jordan (UT Austin, mjordan@cs.utexas.edu); Alexandros G. Dimakis (UT Austin, dimakis@austin.utexas.edu)
Pseudocode | No | The paper describes the algorithmic steps in narrative text, for example: 'To put all the above components together, we summarize our algorithm.' However, it does not include a formally structured pseudocode block or an algorithm figure.
Open Source Code | No | The paper does not include an unambiguous statement about releasing source code for the methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | We evaluate each technique over the unit hypercube across random networks, networks trained on synthetic datasets, and networks trained to distinguish between MNIST 1s and 7s.
Dataset Splits | No | The paper mentions evaluating techniques on 'random networks, networks trained on synthetic datasets, and networks trained to distinguish between MNIST 1s and 7s.' However, it does not provide specific dataset split information (e.g., percentages, sample counts, or an explicit splitting methodology) for training, validation, and testing.
Hardware Specification | No | The acknowledgments mention 'computing resources from TACC', but the paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for its experiments.
Software Dependencies | No | The paper mentions 'PyTorch's automatic differentiation package' and 'Tensorflow', and refers to 'Mixed-Integer Program (MIP) solvers', but it does not provide specific software names with version numbers (e.g., 'PyTorch 1.9') for its ancillary software dependencies. (An illustrative autodiff sketch follows this table.)
Experiment Setup | No | The paper states, 'Full descriptions of the computing environment and experimental details are contained in the supplementary.' However, the main text does not provide specific experimental setup details, such as concrete hyperparameter values, model initialization, or training configurations.
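
For context on the quantity being certified: for a scalar-output ReLU network f and an l_inf ball X, the local Lipschitz constant the paper computes is the maximum over x in X of the l_1 norm of the gradient of f at x. The sketch below is not the paper's exact MIP-based algorithm; it is a minimal sampling-based lower bound built on PyTorch's automatic differentiation, the only deep-learning dependency the Software Dependencies row identifies. The toy network, ball radius, and sample count are illustrative assumptions, not values from the paper.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    # Toy ReLU network (illustrative; not an architecture from the paper).
    net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

    def sampled_lipschitz_lower_bound(net, center, radius, n_samples=1000):
        """Lower-bound the local l_inf Lipschitz constant of a scalar-output
        network over B_inf(center, radius) by the maximum l_1 gradient norm
        at uniformly sampled points."""
        x = center + radius * (2 * torch.rand(n_samples, center.numel()) - 1)
        x.requires_grad_(True)
        # Summing over independent samples lets one backward pass fill in
        # every per-sample gradient at once.
        net(x).sum().backward()
        # The l_inf input norm pairs with the l_1 (dual) norm of the gradient.
        return x.grad.abs().sum(dim=1).max().item()

    center = torch.zeros(2)
    print(sampled_lipschitz_lower_bound(net, center, radius=0.1))

Sampled gradient norms like this can only under-estimate the true constant, which is precisely why the paper's exact computation matters as ground truth: it is what allows the relative error of cheaper estimators to be measured definitively.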