Geometry-aware Instance-reweighted Adversarial Training
Authors: Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically justify the efficacy of GAIRAT. |
| Researcher Affiliation | Academia | 1RIKEN Center for Advanced Intelligence Project, Tokyo, Japan 2National University of Singapore, Singapore 3Hong Kong Baptist University, Hong Kong SAR, China 4The University of Tokyo, Tokyo, Japan |
| Pseudocode | Yes | Algorithm 1 Geometry-aware projected gradient descent (GA-PGD), Algorithm 2 Geometry-aware instance-dependent adversarial training (GAIRAT) |
| Open Source Code | No | The paper contains no unambiguous statement that the authors are releasing the code for the work described in this paper, nor a direct link to a source-code repository for their implementation. Footnotes refer to external GitHub repositories only for baselines or data used in comparisons (e.g., FAT's GitHub, DAT's GitHub, RST's GitHub, TRADES's GitHub, AA's GitHub). |
| Open Datasets | Yes | All images of CIFAR-10 (Krizhevsky, 2009) and SVHN (Netzer et al., 2011) are normalized into [0, 1]. |
| Dataset Splits | Yes | In Table 1, the best checkpoint is chosen among the model checkpoints at Epochs 59-100 (selected based on the robust accuracy on PGD-20 test data). In practice, we can use a hold-out validation set to determine the best checkpoint, since Rice et al. (2020) found the validation curve over epochs matches the test curve over epochs. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running its experiments. It only mentions the general setting, e.g., training ResNet-18 models. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., PyTorch 1.9, Python 3.8) needed to replicate the experiment. |
| Experiment Setup | Yes | We train ResNet-18 using SGD with 0.9 momentum for 100 epochs with the initial learning rate of 0.1 divided by 10 at Epochs 30 and 60, respectively. The weight decay is 0.0005. |
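The pseudocode row above points at the paper's Algorithm 2 (GAIRAT), whose core idea is reweighting each training instance by its geometric value κ, the least number of PGD steps needed to flip the model's prediction. A minimal sketch of the tanh-based weight assignment reported in the paper is below; the per-batch normalization of weights is omitted, and the default `lam` value here is an assumption, not the paper's tuned setting.

```python
import math

def gairat_weight(kappa, K, lam=0.0):
    """Geometry-aware instance weight (sketch of the paper's Algorithm 2).

    kappa: least number of PGD steps that misclassify the instance
           (small kappa = close to the decision boundary = more attackable).
    K:     total number of PGD steps used by GA-PGD.
    lam:   the lambda hyperparameter shifting the tanh; 0.0 is a placeholder.

    Attackable points (small kappa) receive larger weights; guarded points
    (large kappa) receive smaller weights. Output lies in (0, 1).
    """
    return (1 + math.tanh(lam + 5 * (1 - 2 * kappa / K))) / 2
```

In a training loop, these per-instance weights would multiply the adversarial losses before averaging, so boundary-close examples dominate the update.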
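The experiment-setup row describes a standard step schedule: an initial learning rate of 0.1 divided by 10 at Epochs 30 and 60. A minimal sketch of that schedule as a plain function (the function name and signature are illustrative, not from the paper):

```python
def learning_rate(epoch, initial_lr=0.1, milestones=(30, 60), gamma=0.1):
    """Step learning-rate schedule: multiply by gamma at each milestone epoch.

    Matches the setup quoted above: lr = 0.1 for epochs 0-29,
    0.01 for epochs 30-59, and 0.001 from epoch 60 onward.
    """
    lr = initial_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

In PyTorch this would typically be expressed with `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)` on an SGD optimizer with `momentum=0.9` and `weight_decay=5e-4`.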