Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Adversarially Robust Few-Shot Learning: A Meta-Learning Approach

Authors: Micah Goldblum, Liam Fowl, Tom Goldstein

NeurIPS 2020 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Table 1 (5-shot Mini-ImageNet):
Model | A_nat | A_adv
Naturally Trained R2-D2 | 73.01% ± 0.13 | 0.00% ± 0.13
AT transfer (R2-D2 backbone) | 39.13% ± 0.13 | 25.33% ± 0.13
ADML | 47.75% ± 0.13 | 18.49% ± 0.13
AQ R2-D2 (ours) | 57.87% ± 0.13 | 31.52% ± 0.13
Table 1: R2-D2 [3], adversarially trained transfer learning, ADML [33], and our adversarially queried (AQ) R2-D2 model on 5-shot Mini-ImageNet.
Researcher Affiliation | Academia | Micah Goldblum, Department of Mathematics, University of Maryland (EMAIL); Liam Fowl, Department of Mathematics, University of Maryland (EMAIL); Tom Goldstein, Department of Computer Science, University of Maryland (EMAIL)
Pseudocode | Yes | Algorithm 1: The meta-learning framework
Open Source Code | Yes | A PyTorch implementation of adversarial querying can be found at: https://github.com/goldblum/AdversarialQuerying
Open Datasets | Yes | We report performance on Omniglot, Mini-ImageNet, and CIFAR-FS [16, 31, 3].
Dataset Splits | No | The paper describes the use of support and query data for meta-learning but does not provide specific percentages or counts for the overall train/validation/test dataset splits.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions "A PyTorch implementation" but does not specify version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | A description of our training regime can be found in Appendix A.5. ... We consider perturbations with bound 8/255 and a step size of 2/255 as described by [19]. ... Layers are fine-tuned for 10 steps with a learning rate of 0.01.
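The setup rows above fix the reported hyperparameters: an L-infinity perturbation bound of 8/255 with step size 2/255, and 10 fine-tuning steps at learning rate 0.01. As a hedged illustration of how adversarial querying composes these pieces, the sketch below runs one episode: a learner is fine-tuned on clean support data, then evaluated on PGD-perturbed query data. The linear least-squares "learner", the random data, and the choice of 7 PGD steps are assumptions for illustration only, not the authors' PyTorch implementation.

```python
import numpy as np

# Illustrative sketch of one adversarial-querying episode.
# Hyperparameters taken from the paper's setup: eps = 8/255, alpha = 2/255,
# 10 fine-tuning steps with learning rate 0.01. The linear "learner" and
# random data are stand-ins, not the authors' code.

EPS, ALPHA = 8 / 255, 2 / 255

def adapt(support_x, support_y, lr=0.01, steps=10):
    """Inner loop: fine-tune a linear head on the clean support set."""
    w = np.zeros(support_x.shape[1])
    for _ in range(steps):
        grad = support_x.T @ (support_x @ w - support_y) / len(support_y)
        w -= lr * grad
    return w

def pgd_perturb(query_x, query_y, w, eps=EPS, alpha=ALPHA, steps=7):
    """PGD on the query set: ascend the loss, project into the L-inf ball."""
    x_adv = query_x.copy()
    for _ in range(steps):
        residual = x_adv @ w - query_y                        # per-example error
        grad = np.outer(residual, w)                          # d(sq. error)/dx
        x_adv = x_adv + alpha * np.sign(grad)                 # ascent step
        x_adv = np.clip(x_adv, query_x - eps, query_x + eps)  # projection
    return x_adv

rng = np.random.default_rng(0)
sx, sy = rng.normal(size=(20, 5)), rng.normal(size=20)
qx, qy = rng.normal(size=(10, 5)), rng.normal(size=10)

w = adapt(sx, sy)
clean_loss = float(np.mean((qx @ w - qy) ** 2))
adv_loss = float(np.mean((pgd_perturb(qx, qy, w) @ w - qy) ** 2))
print(f"clean query loss {clean_loss:.3f} vs adversarial {adv_loss:.3f}")
```

For squared error on a linear model, each PGD step can only grow the per-example residual, so the adversarial query loss is at least the clean one. In the paper's framework, the outer meta-update would then back-propagate this adversarial query loss through the adaptation step.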