Adversarial robustness via robust low rank representations
Authors: Pranjal Awasthi, Himanshu Jain, Ankit Singh Rawat, Aravindan Vijayaraghavan
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate the improvements obtained by our approach on image data in Section 2 (Empirical Evaluation). We compare Algorithm 1 with the algorithm of [12] for various values of σ and ε (used for training to optimize (7)). We train a ResNet-32 network on the CIFAR-10 dataset by optimizing (7). In Figure 2 we present the result of our training procedure for various values of ε and σ and compare with the ℓ2 smoothing method of [12] on the CIFAR-10 and CIFAR-100 datasets. |
| Researcher Affiliation | Collaboration | Pranjal Awasthi, Google Research and Rutgers University. Himanshu Jain, Google Research. Ankit Singh Rawat, Google Research. Aravindan Vijayaraghavan, Northwestern University. |
| Pseudocode | Yes | Algorithm 1: Adversarial training via projections. Algorithm 2: Fast Certification of ℓ1 norm and Quadratic Programming. |
| Open Source Code | No | No explicit statement or link providing access to the open-source code for the methodology described in this paper. |
| Open Datasets | Yes | We train a ResNet-32 network on the CIFAR-10 dataset by optimizing (7). In Figure 2 we present the result of our training procedure for various values of ε and σ and compare with the ℓ2 smoothing method of [12] on the CIFAR-10 and CIFAR-100 datasets. |
| Dataset Splits | No | No explicit details on train/validation/test dataset splits (e.g., percentages, sample counts, or specific split files) are provided in the paper. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments are provided in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8') are provided in the paper. |
| Experiment Setup | Yes | We compare Algorithm 1 with the algorithm of [12] for various values of σ and ε (used for training to optimize (7)). See Appendix B for a description of the hyperparameters and additional experiments. Following [6, 12], the inner maximization of finding adversarial perturbations is solved via projected gradient descent (PGD), and given the adversarial perturbations, the outer minimization uses stochastic gradient descent. |
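
The Experiment Setup row describes the training loop concretely enough to sketch: PGD for the inner maximization, SGD for the outer minimization, with inputs projected onto a low-rank subspace in the spirit of Algorithm 1. Below is a minimal PyTorch sketch under those assumptions; the helper names (`low_rank_projector`, `pgd_attack`, `train_step`), the ℓ∞ threat model, and the step-size heuristic are illustrative choices, not the authors' code (the Open Source Code row reports that none is released).

```python
# Hedged sketch: adversarial training over low-rank-projected inputs.
# Assumes PyTorch; helper names and the l_inf threat model are illustrative.
import torch
import torch.nn.functional as F


def low_rank_projector(data_matrix: torch.Tensor, rank: int) -> torch.Tensor:
    """Projector onto the span of the top-`rank` right singular vectors
    of an (n_samples, dim) matrix of flattened training images."""
    _, _, vh = torch.linalg.svd(data_matrix, full_matrices=False)
    v = vh[:rank].T                      # (dim, rank) orthonormal basis
    return v @ v.T                       # (dim, dim) projector P = V V^T


def project(x: torch.Tensor, proj: torch.Tensor) -> torch.Tensor:
    """Apply the projector to a batch of images of shape (B, C, H, W)."""
    return (x.flatten(1) @ proj).view_as(x)


def pgd_attack(model, proj, x, y, eps, alpha, steps):
    """Inner maximization: PGD over an eps-ball around the projected input."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(project(x + delta, proj)), y)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()   # gradient ascent step
            delta.clamp_(-eps, eps)        # project back into the eps-ball
    return delta.detach()


def train_step(model, optimizer, proj, x, y, eps, alpha=None, steps=10):
    """Outer minimization: one SGD step on the adversarially perturbed batch."""
    alpha = alpha if alpha is not None else 2.5 * eps / steps  # common heuristic
    delta = pgd_attack(model, proj, x, y, eps, alpha, steps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(project(x + delta, proj)), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The σ in the table appears to refer to the Gaussian noise level used when comparing against the ℓ2 randomized-smoothing method of [12]; that smoothing wrapper is orthogonal to the loop above and omitted from the sketch.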