Robust Implicit Networks via Non-Euclidean Contractions
Authors: Saber Jafarpour, Alexander Davydov, Anton Proskurnikov, Francesco Bullo
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we evaluate our framework in image classification through the MNIST and the CIFAR-10 datasets. Our numerical results demonstrate improved accuracy and robustness of the implicit models with smaller input-output Lipschitz bounds. |
| Researcher Affiliation | Academia | 1 Center for Control, Dynamical Systems and Computation, University of California, Santa Barbara, 93106-5070, USA, {saber, davydov, bullo}@ucsb.edu. 2 Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy; 3 Institute for Problems in Mechanical Engineering, Russian Academy of Sciences, St. Petersburg, Russia, anton.p.1982@ieee.org |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/davydovalexander/Non-Euclidean_Mon_Op_Net. |
| Open Datasets | Yes | Finally, we evaluate our framework in image classification through the MNIST and the CIFAR-10 datasets. In the digit classification dataset MNIST... In the image classification dataset CIFAR-10... |
| Dataset Splits | No | The paper mentions 60000 training images and 10000 test images for MNIST, and 50000 training images and 10000 test images for CIFAR-10, but does not specify a separate validation set or its split details. |
| Hardware Specification | Yes | All models were trained using Google Colab with a Tesla P100-PCIE-16GB GPU. |
| Software Dependencies | No | The paper does not explicitly mention specific software dependencies with version numbers. |
| Experiment Setup | Yes | All models are of order n = 100, used the ReLU activation function φ_i(x) = (x)_+, and are trained with a batch size of 300 over 10 epochs with a learning rate of 1.5 × 10^-2. We train both models with a batch size of 256 and a learning rate of 10^-3 for 40 epochs. |
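
The experiment-setup row above quotes concrete hyperparameters. Below is a minimal sketch of how those quoted values (batch size 300, 10 epochs, learning rate 1.5 × 10^-2, model order n = 100, ReLU activations) could map onto a standard PyTorch training loop on MNIST. The `PlaceholderModel` class and the choice of Adam as optimizer are assumptions for illustration only; this is not the authors' implicit-network implementation, which is available at the repository linked in the table.

```python
# Hypothetical sketch of the quoted MNIST training configuration.
# PlaceholderModel stands in for the paper's implicit network; the optimizer
# choice (Adam) is assumed, since the paper excerpt does not specify it.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

BATCH_SIZE = 300        # quoted MNIST batch size
EPOCHS = 10             # quoted number of epochs
LEARNING_RATE = 1.5e-2  # quoted learning rate, 1.5 x 10^-2
HIDDEN_ORDER = 100      # quoted model order n


class PlaceholderModel(nn.Module):
    """Stand-in for the implicit network: a plain one-hidden-layer MLP."""

    def __init__(self, in_dim=28 * 28, hidden=HIDDEN_ORDER, out_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, hidden),
            nn.ReLU(),  # phi_i(x) = (x)_+, as quoted in the setup
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)


def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    train_set = datasets.MNIST("data", train=True, download=True,
                               transform=transforms.ToTensor())
    loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)

    model = PlaceholderModel().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(EPOCHS):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch + 1}/{EPOCHS}: last batch loss {loss.item():.4f}")


if __name__ == "__main__":
    main()
```

The second quoted configuration (batch size 256, learning rate 10^-3, 40 epochs, CIFAR-10) would follow the same pattern with the constants and dataset swapped accordingly.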