Globally-Robust Neural Networks
Authors: Klas Leino, Zifan Wang, Matt Fredrikson
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present an empirical evaluation of our method. |
| Researcher Affiliation | Academia | Klas Leino, Zifan Wang, Matt Fredrikson (Carnegie Mellon University, Pittsburgh, Pennsylvania, USA). |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. The methods are described in prose and mathematical formulations. |
| Open Source Code | Yes | An implementation of our approach is available on GitHub: https://github.com/klasleino/gloro (see the certification sketch after the table). |
| Open Datasets | Yes | MNIST (LeCun et al., 2010), CIFAR-10 (Krizhevsky, 2009), and Tiny-ImageNet (Le & Yang, 2015) |
| Dataset Splits | No | The paper mentions training on MNIST, CIFAR-10, and Tiny-ImageNet and evaluating on the "entire test set," but does not explicitly specify the percentages or methodology used for training, validation, and test splits. |
| Hardware Specification | Yes | All timings were taken on a machine using a GeForce RTX 3080 accelerator, 64 GB memory, and an Intel i9-10850K CPU, with the exception of those for the KW (Wong et al., 2018) method, which were taken on a Titan RTX card for toolkit compatibility reasons. |
| Software Dependencies | No | The paper mentions using ART (Nicolae et al., 2019) for PGD attacks but does not provide specific version numbers for software dependencies like Python, deep learning frameworks (e.g., PyTorch, TensorFlow), or CUDA. |
| Experiment Setup | Yes | Further details on the precise hyperparameters used for training and attacks, the process for obtaining these parameters, and the network architectures are provided in Appendix B. |
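
The gloro repository linked above implements GloRo Nets, which certify L2 robustness at prediction time by comparing the top logit against the worst-case gain any other logit could achieve within an ε-ball, given a Lipschitz bound on the logit differences. The following is a minimal NumPy sketch of that certification check only, not the gloro library's API; the pairwise Lipschitz bounds are assumed to be precomputed (e.g., from layer-wise spectral norms), and the function names are hypothetical.

```python
# Illustrative sketch of a Lipschitz-margin certification check in the
# spirit of GloRo Nets. Not the gloro library API.
import numpy as np

def certified_prediction(logits, lipschitz_pairwise, eps):
    """Return the predicted class, or -1 (the "bottom" class) if the
    prediction cannot be certified within an L2 ball of radius eps.

    logits:             shape (num_classes,), model outputs at input x.
    lipschitz_pairwise: shape (num_classes, num_classes); entry [j, i] is an
                        assumed upper bound on the Lipschitz constant of
                        f_i - f_j (hypothetical precomputed values).
    eps:                certification radius.
    """
    j = int(np.argmax(logits))
    # Within the eps-ball, logit i can gain at most eps * K[j, i] on logit j.
    adjusted = logits + eps * lipschitz_pairwise[j]
    adjusted[j] = -np.inf          # exclude the predicted class itself
    bottom = adjusted.max()        # plays the role of the extra "bottom" logit
    return j if logits[j] > bottom else -1

# Toy usage with made-up numbers:
logits = np.array([2.0, 0.5, -1.0])
K = np.ones((3, 3)) * 1.5          # hypothetical pairwise Lipschitz bounds
print(certified_prediction(logits, K, eps=0.3))  # 0  (certified)
print(certified_prediction(logits, K, eps=2.0))  # -1 (not certified)
```

Rejecting to the "bottom" class whenever the margin is smaller than the worst-case Lipschitz-bounded perturbation is what makes the certified accuracy reported in the paper computable with a single forward pass.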