Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation

Authors: Zhouxing Shi, Yihan Wang, Huan Zhang, J. Zico Kolter, Cho-Jui Hsieh

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that on tiny models, our method produces bounds comparable to exact methods that cannot scale to slightly larger models; on larger models, our method efficiently produces tighter results than existing relaxed or naive methods, and it scales to much larger practical models that previous works could not handle. (A sketch of a naive baseline follows the table.)
Researcher Affiliation | Collaboration | Zhouxing Shi¹, Yihan Wang¹, Huan Zhang², Zico Kolter²,³, Cho-Jui Hsieh¹ (¹University of California, Los Angeles; ²Carnegie Mellon University; ³Bosch Center for AI)
Pseudocode | No | The paper describes its methodology in text and figures but does not include structured pseudocode or algorithm blocks with explicit labels such as 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | Code is available at https://github.com/shizhouxing/Local-Lipschitz-Constants.
Open Datasets | Yes | We conduct experiments on image datasets including MNIST [30], CIFAR-10 [27], and Tiny ImageNet [29]. (A dataset-loading sketch follows the table.)
Dataset Splits | No | The paper mentions evaluating on a 'test set' but does not explicitly describe a validation split (e.g., percentages, sample counts, or a citation for standard splits).
Hardware Specification | No | The paper does not explicitly state the hardware used for its experiments (e.g., GPU/CPU models, memory, or cloud provider).
Software Dependencies | No | The paper mentions tools such as PyTorch (Appendix A.1) but does not give version numbers for any software components, libraries, or solvers used in the experiments.
Experiment Setup | No | The paper gives timeout settings for baselines ('We set a timeout of 1000s for LipMIP and LipSDP, and 60s for BaB.') and refers to prior work for training details ('We follow Jordan & Dimakis [24] and train several small models on a synthetic dataset.'), but it does not list hyperparameters such as learning rate, batch size, or optimizer settings for its own models.
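
For context on the "naive methods" mentioned in the Research Type row, the sketch below contrasts the standard naive upper bound on a feed-forward ReLU network's Lipschitz constant (the product of per-layer spectral norms) with a simple sampling-based local lower bound. This is a minimal illustration only, not the paper's bound-propagation method; the architecture, perturbation radius, and sample count are illustrative assumptions.

```python
# Minimal sketch (assumed setup, NOT the paper's method): naive global upper
# bound vs. a sampling-based local lower bound on the Lipschitz constant.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(784, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 10))

def naive_upper_bound(model):
    """Product of spectral norms of the linear layers (ReLU is 1-Lipschitz)."""
    bound = 1.0
    for layer in model:
        if isinstance(layer, nn.Linear):
            bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()
    return bound

def sampled_lower_bound(model, x0, radius=0.1, n_samples=1000):
    """Max gradient norm of one logit over random points in an L2 ball around x0.

    A single row norm of the Jacobian lower-bounds its spectral norm, so this
    is a valid (but loose) lower bound on the local Lipschitz constant.
    """
    best = 0.0
    for _ in range(n_samples):
        delta = torch.randn_like(x0)
        delta = radius * delta / delta.norm()
        x = (x0 + delta).requires_grad_(True)
        y = model(x)[0, 0]
        (g,) = torch.autograd.grad(y, x)
        best = max(best, g.norm().item())
    return best

x0 = torch.randn(1, 784)
print("naive global upper bound:", naive_upper_bound(net))
print("sampled local lower bound:", sampled_lower_bound(net, x0))
```

Any certified local bound produced by a method like the paper's must lie between these two quantities, which is why the gap between naive upper bounds and sampled lower bounds is the usual reference point for tightness.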
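
As a quick note on the Open Datasets row, two of the three datasets can be fetched directly through torchvision, as in the sketch below (an assumed setup, not taken from the paper); Tiny ImageNet is not bundled with torchvision and must be downloaded separately.

```python
# Minimal sketch: fetch the MNIST and CIFAR-10 test sets via torchvision.
# Paths and transforms are illustrative assumptions.
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()
mnist_test = torchvision.datasets.MNIST(root="./data", train=False,
                                        download=True, transform=transform)
cifar_test = torchvision.datasets.CIFAR10(root="./data", train=False,
                                          download=True, transform=transform)
print(len(mnist_test), len(cifar_test))  # 10000, 10000
```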