Novel Quadratic Constraints for Extending LipSDP beyond Slope-Restricted Activations
Authors: Patricia Pauli, Aaron J Havens, Alexandre Araujo, Siddharth Garg, Farshad Khorrami, Frank Allgöwer, Bin Hu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we illustrate the utility of our approach with a variety of experiments and show that our proposed SDPs generate less conservative Lipschitz bounds in comparison to existing approaches. |
| Researcher Affiliation | Academia | (1) Institute for Systems Theory and Automatic Control, University of Stuttgart; (2) ECE & CSL, University of Illinois Urbana-Champaign; (3) ECE, New York University |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code to reproduce all experiments is provided at https://github.com/ppauli/QCs_MaxMin. |
| Open Datasets | Yes | we evaluate our approach on several MaxMin neural networks trained on the MNIST dataset (LeCun & Cortes, 2010) |
| Dataset Splits | No | The paper mentions training on the MNIST dataset but does not specify the training, validation, and test splits needed to reproduce the experiments. |
| Hardware Specification | No | The paper mentions running experiments "on a standard i7 notebook", but this lacks a specific CPU model number (e.g., i7-XXXX) or details on other hardware components needed to ensure reproducibility. |
| Software Dependencies | Yes | We solve problem (10) and an SDP that minimizes ρ subject to (15), using YALMIP (Löfberg, 2004) with the solver Mosek (MOSEK ApS, 2020) in Matlab. |
| Experiment Setup | No | The paper states, "Following the same experimental setting as Wang et al. (2022)", but does not explicitly provide the specific hyperparameter values or training configurations in its own main text. |