Lipschitz regularity of deep neural networks: analysis and efficient estimation
Authors: Aladin Virmaux, Kevin Scaman
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that SeqLip can significantly improve on the existing upper bounds. Finally, we provide an implementation of AutoLip in the PyTorch environment that may be used to better estimate the robustness of a given neural network to small perturbations or regularize it using more precise Lipschitz estimations. |
| Researcher Affiliation | Industry | Kevin Scaman, Huawei Noah's Ark Lab, kevin.scaman@huawei.com; Aladin Virmaux, Huawei Noah's Ark Lab, aladin.virmaux@huawei.com |
| Pseudocode | Yes | Algorithm 1 AutoLip. Input: function f : R^n → R^m and its computation graph (g_1, ..., g_K). Output: upper bound on the Lipschitz constant: L̂_AL ≥ L(f). ... Algorithm 2 AutoGrad compliant power method. Input: affine function f : R^n → R^m, number of iterations N. Output: approximation of the Lipschitz constant L(f). (A minimal sketch of Algorithm 2 is given after the table.) |
| Open Source Code | Yes | The code used in this paper is available at https://github.com/avirmaux/lipEstimation. |
| Open Datasets | Yes | CNN. We construct simple CNNs with increasing number of layers that we train independently on the MNIST dataset [29]. The details of the structure of the CNNs are given in the supplementary material. |
| Dataset Splits | No | The paper states that MLPs were trained on a synthetic dataset and CNNs on MNIST, but it does not provide specific details about training, validation, and test splits (e.g., percentages or counts) in the main text. It implicitly relies on the standard MNIST benchmark split but does not specify it directly. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU models, CPU types, memory amounts) used for running the experiments. It only mentions general setup such as being "implemented in the PyTorch environment" and gives no hardware information. |
| Software Dependencies | No | The paper mentions PyTorch [21] and TensorFlow [23] but does not specify exact version numbers for these or any other software dependencies. For example, it states: "Efficient implementations of backpropagation in modern deep learning libraries such as PyTorch [21] or TensorFlow [23] rely on the concept of automatic differentiation [24, 20]." |
| Experiment Setup | No | The paper states that MLPs were trained "with MSE loss and ReLU activations" and CNNs on the MNIST dataset, but it does not provide specific hyperparameters such as learning rate, batch size, or number of epochs in the main text. It notes that "The details of the structure of the CNNs are given in the supplementary material", which may contain some setup details, but they are not in the main paper. |
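
The AutoGrad compliant power method quoted in the Pseudocode row can be sketched in a few lines of PyTorch. The sketch below is an illustration under our own naming assumptions (`power_method_spectral_norm`, `n_iter`), not the authors' released implementation: for an affine map f(x) = Wx + b, the gradient of 0.5·||f(v) − f(0)||² with respect to v equals WᵀWv, so repeated normalized gradient steps converge toward the top singular direction and ||Wv|| approximates the Lipschitz constant of f.

```python
# Sketch of an autograd-based power method for the spectral norm (Lipschitz
# constant) of an affine map f(x) = Wx + b, in the spirit of Algorithm 2.
# Names are illustrative and not taken from the paper's code.
import torch

def power_method_spectral_norm(f, input_dim, n_iter=100):
    """Estimate the largest singular value of the Jacobian of an affine f."""
    v = torch.randn(input_dim)
    v /= v.norm()
    bias = f(torch.zeros(input_dim)).detach()  # f(0) = b, subtracted to isolate Wx
    for _ in range(n_iter):
        v = v.detach().requires_grad_(True)
        # The gradient of 0.5 * ||W v||^2 w.r.t. v is W^T W v, i.e. the power-method update
        objective = 0.5 * (f(v) - bias).pow(2).sum()
        (g,) = torch.autograd.grad(objective, v)
        v = g / g.norm()
    # At convergence, v approximates the top right singular vector, so ||W v||
    # approximates the spectral norm of W, i.e. the Lipschitz constant of f.
    return (f(v.detach()) - bias).norm().item()

# Example: for a linear layer, the estimate should match the exact spectral norm.
layer = torch.nn.Linear(20, 10)
estimate = power_method_spectral_norm(layer, input_dim=20)
reference = torch.linalg.matrix_norm(layer.weight, ord=2).item()
print(estimate, reference)
```

This only covers the affine building block; AutoLip itself then combines such per-operation bounds over the computation graph (g_1, ..., g_K) to produce the overall upper bound L̂_AL.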