Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Adversarial Robustness without Adversarial Training: A Teacher-Guided Curriculum Learning Approach

Authors: Anindya Sarkar, Anirban Sarkar, Sowrya Gali, Vineeth N Balasubramanian

NeurIPS 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have performed extensive experimentation with CIFAR-10, CIFAR-100, and Tiny ImageNet datasets and reported results against many popular strong adversarial attacks to prove the effectiveness of our method.
Researcher Affiliation | Academia | Anindya Sarkar, Anirban Sarkar*, Sowrya Gali*, and Vineeth N Balasubramanian, all Indian Institute of Technology, Hyderabad.
Pseudocode | No | The paper describes its method using text and mathematical equations, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? Yes, in the supplementary material.
Open Datasets | Yes | We have performed extensive experimentation with CIFAR-10, CIFAR-100, and Tiny ImageNet datasets and reported results against many popular strong adversarial attacks to prove the effectiveness of our method. ... Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Dataset Splits | No | The paper mentions using the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets and refers to a 'standard setup' for evaluation, but does not explicitly state the train/validation/test splits (e.g., percentages or sample counts) in the main text.
Hardware Specification | No | The paper states that compute resources are detailed in the Appendix, but the provided text does not contain specific hardware details such as GPU or CPU models, or cloud instance types used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | For CIFAR-10, the first phase of training runs for 100 epochs, followed by 15 epochs for each k in the second phase, up to k = 50. β = 2 is kept throughout training, while γ changes from 10 in the first phase to 20 in the second. The learning rate decays uniformly from 0.1 to 0.001 in the first phase and from 0.001 to 0.0001 in the second. For CIFAR-100, training uses the same number of epochs and learning rates as CIFAR-10 for both phases, with β = 2 throughout and γ changing from 10 in the first phase to 15 in the second.
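The CIFAR-10 schedule described above can be sketched as a simple epoch-to-hyperparameter mapping. This is a minimal illustration, not the authors' code: the paper says the learning rate "uniformly decays", which is interpreted here as linear interpolation, and all function and variable names (including the assumption that k advances by 1 per curriculum step) are hypothetical.

```python
# Hedged sketch of the two-phase CIFAR-10 training schedule reported
# in the paper. Linear interpolation for the "uniform" decay and the
# unit increment of k are assumptions, not confirmed by the source.

def linear_decay(start, end, step, total_steps):
    """Linearly interpolate from start to end over total_steps steps."""
    frac = step / max(total_steps - 1, 1)
    return start + (end - start) * frac

def cifar10_schedule(epoch):
    """Return (learning_rate, beta, gamma) for a given global epoch.

    Phase 1: 100 epochs, lr decays 0.1 -> 0.001, gamma = 10.
    Phase 2: 15 epochs per curriculum step k, k = 1..50 (assumed),
             lr decays 0.001 -> 0.0001, gamma = 20.
    beta = 2 throughout both phases.
    """
    phase1_epochs = 100
    epochs_per_k = 15
    max_k = 50  # curriculum runs up to k = 50
    if epoch < phase1_epochs:
        lr = linear_decay(0.1, 0.001, epoch, phase1_epochs)
        return lr, 2, 10
    phase2_epochs = epochs_per_k * max_k  # 750 epochs total in phase 2
    step = epoch - phase1_epochs
    lr = linear_decay(0.001, 0.0001, step, phase2_epochs)
    return lr, 2, 20
```

For CIFAR-100, the same structure would apply with the phase-2 γ set to 15 instead of 20.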