Adversarial Robustness without Adversarial Training: A Teacher-Guided Curriculum Learning Approach
Authors: Anindya Sarkar, Anirban Sarkar, Sowrya Gali, Vineeth N Balasubramanian
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have performed extensive experimentation with CIFAR-10, CIFAR-100, and Tiny ImageNet datasets and reported results against many popular strong adversarial attacks to prove the effectiveness of our method. |
| Researcher Affiliation | Academia | Anindya Sarkar, Indian Institute of Technology, Hyderabad (anindyasarkar.ece@gmail.com); Anirban Sarkar*, Indian Institute of Technology, Hyderabad (cs16resch11006@iith.ac.in); Sowrya Gali*, Indian Institute of Technology, Hyderabad (cs18btech11012@iith.ac.in); Vineeth N Balasubramanian, Indian Institute of Technology, Hyderabad (vineethnb@iith.ac.in) |
| Pseudocode | No | The paper describes its method using text and mathematical equations, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? Yes, in the supplementary material |
| Open Datasets | Yes | We have performed extensive experimentation with CIFAR-10, CIFAR-100, and Tiny ImageNet datasets and reported results against many popular strong adversarial attacks to prove the effectiveness of our method. ... Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. |
| Dataset Splits | No | The paper mentions using the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets and refers to a 'standard setup' for evaluation, but does not explicitly state the train/validation/test splits (e.g., percentages or sample counts) in the main text. |
| Hardware Specification | No | The paper states that compute resources are detailed in the Appendix, but the provided text does not contain specific hardware details such as GPU or CPU models, or cloud instance types used for experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers, used to replicate the experiment. |
| Experiment Setup | Yes | For CIFAR-10, we run the first phase of training for 100 epochs, and in the second phase train 15 epochs for each k up to k = 50. While we keep β = 2 throughout training, γ is changed from 10 in the first phase to 20 in the second phase. The learning rate uniformly decays from 0.1 to 0.001 in the first phase and from 0.001 to 0.0001 in the second phase. For CIFAR-100, we train with the same number of epochs as CIFAR-10 in both phases. While we keep β = 2 throughout training, γ is changed from 10 in the first phase to 15 in the second phase. The learning rate schedule is the same as for CIFAR-10. |
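
For reproduction purposes, the quoted CIFAR-10 hyperparameters can be expanded into an explicit per-epoch plan. The sketch below is one possible reading of that setup; the starting value and step size of the curriculum variable k, and the interpretation of "uniformly decays" as linear per-epoch decay, are assumptions not stated in the excerpt.

```python
# Minimal sketch of the two-phase CIFAR-10 training schedule quoted above.
# Assumptions (not in the excerpt): k starts at 1 and increases by 1 every
# 15 epochs until k = 50; "uniformly decays" means linear per-epoch decay.

def linear_decay(start, end, num_steps):
    """Linearly interpolate a value from start to end over num_steps epochs."""
    return [start + (end - start) * i / (num_steps - 1) for i in range(num_steps)]

def build_cifar10_schedule():
    schedule = []  # one dict per epoch: lr, beta, gamma, curriculum step k

    # Phase 1: 100 epochs, beta = 2, gamma = 10, lr decays 0.1 -> 0.001.
    for lr in linear_decay(0.1, 0.001, 100):
        schedule.append({"lr": lr, "beta": 2, "gamma": 10, "k": None})

    # Phase 2: 15 epochs per k for k = 1..50 (assumed), beta = 2, gamma = 20,
    # lr decays 0.001 -> 0.0001 across the whole phase.
    ks = list(range(1, 51))
    for epoch, lr in enumerate(linear_decay(0.001, 0.0001, 15 * len(ks))):
        schedule.append({"lr": lr, "beta": 2, "gamma": 20, "k": ks[epoch // 15]})

    return schedule

if __name__ == "__main__":
    schedule = build_cifar10_schedule()
    print(len(schedule))                 # 100 + 15 * 50 = 850 epochs total
    print(schedule[0], schedule[-1])
```

Per the excerpt, the CIFAR-100 variant would differ only in the phase-2 value of γ (15 instead of 20); everything else, including the learning-rate decay, is stated to be identical.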