On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning
Authors: Ren Wang, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Chuang Gan, Meng Wang
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, extensive experiments are conducted to demonstrate the effectiveness of our proposed methods in robust few-shot learning. Codes are available at https://github.com/wangren09/MetaAdv. |
| Researcher Affiliation | Collaboration | Ren Wang¹,⁴, Kaidi Xu², Sijia Liu³,⁵, Pin-Yu Chen³, Tsui-Wei Weng³, Chuang Gan³ (¹Rensselaer Polytechnic Institute, USA; ²Northeastern University, USA; ³MIT-IBM Watson AI Lab, IBM Research, USA; ⁴University of Michigan, USA; ⁵Michigan State University, USA) |
| Pseudocode | Yes | Algorithm S1 (R-MAML); a hedged sketch of this meta-update appears below the table |
| Open Source Code | Yes | Codes are available at https://github.com/wangren09/MetaAdv. |
| Open Datasets | Yes | To test the effectiveness of our methods, we employ the MiniImageNet dataset (Vinyals et al., 2016), which is the benchmark for few-shot learning. |
| Dataset Splits | Yes | MiniImageNet contains 100 classes with 600 samples in each class. We use the training set with 64 classes and the test set with 20 classes. ... For the meta-update, we use 15 validation images for each class. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper does not specify the versions of software dependencies, libraries, or frameworks used for the implementation or experiments. |
| Experiment Setup | Yes | By default, we set the training attack strength ϵ = 2, γ_CL = 0.1, and set γ_out = 5 (TRADES), γ_out = 0.2 (AT) via a grid search. During meta-testing, a 10-step PGD attack with attack strength ϵ = 2 is used to evaluate RA of the learnt meta-model over 2400 few-shot test tasks. ... We set the gradient step size in the fine-tuning as α = 0.01, and the gradient step sizes in the meta-update as β_1 = 0.001, β_2 = 0.001 for clean validation data and adversarial validation data, respectively. A hedged sketch of this protocol follows the table. |
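
The pseudocode row above points to Algorithm S1 (R-MAML), whose meta-update combines a clean inner-loop fine-tuning step with an outer update driven by both clean and adversarial validation losses, matching the α, β_1, and β_2 values quoted in the experiment-setup row. Below is a minimal first-order PyTorch sketch of that structure. It is an illustration only: the `pgd_attack` helper, the 2/255 reading of ϵ = 2, the per-step attack size, the first-order approximation, and all names are assumptions, not the authors' implementation (see the linked repository for that).

```python
# Hedged first-order sketch in the spirit of Algorithm S1 (R-MAML).
# All hyperparameter values follow the experiment-setup row above;
# everything else is an illustrative assumption.
import copy

import torch
import torch.nn.functional as F

ALPHA = 0.01   # inner-loop (fine-tuning) step size, alpha in the table
BETA1 = 0.001  # meta step size for the clean validation loss, beta_1
BETA2 = 0.001  # meta step size for the adversarial validation loss, beta_2


def pgd_attack(model, x, y, eps=2 / 255, steps=10, step_size=0.5 / 255):
    """L-inf PGD. Reading the table's eps = 2 as 2/255 on [0, 1] images is
    an assumption here, as is the per-step size."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        # Project back onto the eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()


def meta_step(meta_model, support, query):
    """One meta-update: clean inner-loop fine-tuning on the support set,
    then a meta-update from clean and adversarial query (validation) losses."""
    (xs, ys), (xq, yq) = support, query
    # Inner loop: fine-tune a task-specific copy on clean support data.
    learner = copy.deepcopy(meta_model)
    inner_loss = F.cross_entropy(learner(xs), ys)
    grads = torch.autograd.grad(inner_loss, list(learner.parameters()))
    with torch.no_grad():
        for p, g in zip(learner.parameters(), grads):
            p.sub_(ALPHA * g)
    # Outer loop: clean validation loss and adversarial validation loss.
    clean_loss = F.cross_entropy(learner(xq), yq)
    g_clean = torch.autograd.grad(clean_loss, list(learner.parameters()))
    xq_adv = pgd_attack(learner, xq, yq)
    adv_loss = F.cross_entropy(learner(xq_adv), yq)
    g_adv = torch.autograd.grad(adv_loss, list(learner.parameters()))
    # First-order approximation: apply the learner's gradients to the
    # meta-parameters directly (second-order terms are dropped).
    with torch.no_grad():
        for p, gc, ga in zip(meta_model.parameters(), g_clean, g_adv):
            p.sub_(BETA1 * gc + BETA2 * ga)
```

Keeping two separate meta step sizes (β_1 for clean, β_2 for adversarial validation data) lets clean accuracy and robustness be weighted independently in the meta-update, which is consistent with the grid-searched values quoted above.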
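The experiment-setup row also fixes the meta-testing protocol: a 10-step PGD attack with strength ϵ = 2 is used to evaluate robust accuracy (RA) over 2400 few-shot test tasks. Reusing `pgd_attack` and the constants from the sketch above, a hedged version of that evaluation loop might look as follows; the single clean fine-tuning step per task and the task-iterator format are illustrative assumptions, not the paper's exact procedure.

```python
def robust_accuracy(meta_model, test_tasks, eps=2 / 255, steps=10):
    """RA under 10-step PGD at meta-test time. `test_tasks` is assumed to
    yield ((xs, ys), (xq, yq)) few-shot tasks, e.g. 2400 of them."""
    correct, total = 0, 0
    for (xs, ys), (xq, yq) in test_tasks:
        # Fine-tune a copy of the meta-model on the clean support set.
        learner = copy.deepcopy(meta_model)
        loss = F.cross_entropy(learner(xs), ys)
        grads = torch.autograd.grad(loss, list(learner.parameters()))
        with torch.no_grad():
            for p, g in zip(learner.parameters(), grads):
                p.sub_(ALPHA * g)
        # Attack the query set and count robust predictions.
        xq_adv = pgd_attack(learner, xq, yq, eps=eps, steps=steps)
        with torch.no_grad():
            pred = learner(xq_adv).argmax(dim=1)
        correct += (pred == yq).sum().item()
        total += yq.numel()
    return correct / total
```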