Adversarial Training Should Be Cast as a Non-Zero-Sum Game

Authors: Alexander Robey, Fabian Latorre, George J. Pappas, Hamed Hassani, Volkan Cevher

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we evaluate the performance of BETA and BETA-AT on CIFAR-10 (Krizhevsky et al., 2009)."
Researcher Affiliation | Academia | Alexander Robey, University of Pennsylvania (arobey1@upenn.edu); Fabian Latorre, LIONS, EPFL (fabian.latorre@epfl.ch); George J. Pappas, University of Pennsylvania (pappasg@upenn.edu); Hamed Hassani, University of Pennsylvania (hassani@upenn.edu); Volkan Cevher, LIONS, EPFL (volkan.cevher@epfl.ch)
Pseudocode | Yes | Algorithm 1: Best Targeted Attack (BETA); see the attack sketch after the table.
Open Source Code | No | The paper does not provide concrete access to source code, such as a repository link or an explicit code-release statement.
Open Datasets | Yes | "In this section, we evaluate the performance of BETA and BETA-AT on CIFAR-10 (Krizhevsky et al., 2009)."
Dataset Splits | Yes | "We report the performance of two different checkpoints for each algorithm: the best performing checkpoint chosen by early stopping on a held-out validation set, and the performance of the last checkpoint from training." (See the checkpoint sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types and speeds, memory amounts, or other machine specifications) used for its experiments.
Software Dependencies | No | The paper mentions optimizers such as Adam (Kingma & Ba, 2014) and RMSprop but does not give version numbers for these or any other software dependencies.
Experiment Setup | Yes | "We consider the standard perturbation budget of ϵ = 8/255, and all training and test-time attacks use a step size of α = 2/255. For both TRADES and MART, we set the trade-off parameter λ = 5, which is consistent with the original implementations (Wang et al., 2020; Zhang et al., 2019)." (See the TRADES sketch after the table.)
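
The Pseudocode row points to Algorithm 1, the Best Targeted Attack (BETA). As a reference, here is a minimal PyTorch sketch of the best-targeted-attack idea: run a targeted PGD-style ascent toward each incorrect class and keep the perturbation achieving the largest margin. The function name `beta_attack`, the step count, the random start, and the plain logit margin are illustrative assumptions, not the paper's exact algorithm; ϵ and α follow the quoted setup.

```python
import torch

def beta_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Sketch of the BETA idea: for each incorrect target class j, ascend
    the margin f_j(x + delta) - f_y(x + delta) with PGD-style steps, then
    return the perturbation with the largest margin across targets."""
    num_classes = model(x).shape[1]
    best_margin = torch.full((x.shape[0],), -float("inf"), device=x.device)
    best_delta = torch.zeros_like(x)

    for j in range(num_classes):
        mask = y != j  # only attack toward classes other than the true label
        if not mask.any():
            continue
        target = torch.full_like(y, j)
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            logits = model(x + delta)
            margin = logits.gather(1, target[:, None]).squeeze(1) \
                   - logits.gather(1, y[:, None]).squeeze(1)
            grad, = torch.autograd.grad(margin.sum(), delta)
            with torch.no_grad():
                delta += alpha * grad.sign()              # ascend the margin
                delta.clamp_(-eps, eps)                   # l_inf projection
                delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixels in [0, 1]
        with torch.no_grad():
            logits = model(x + delta)
            margin = logits.gather(1, target[:, None]).squeeze(1) \
                   - logits.gather(1, y[:, None]).squeeze(1)
            improved = mask & (margin > best_margin)
            best_margin = torch.where(improved, margin, best_margin)
            best_delta[improved] = delta[improved]
    return (x + best_delta).detach()
```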
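
For the Dataset Splits row, a minimal sketch of the described checkpoint protocol, retaining both the early-stopped and final checkpoints; `run_epoch` (one training pass) and `robust_val_acc` (robust accuracy on the held-out validation split) are hypothetical callables.

```python
import copy

def train_with_early_stopping(model, run_epoch, robust_val_acc, num_epochs):
    """Keep the best checkpoint under early stopping on a held-out
    validation set alongside the last checkpoint from training."""
    best_acc, best_state = -1.0, None
    for _ in range(num_epochs):
        run_epoch(model)
        acc = robust_val_acc(model)  # early-stopping criterion
        if acc > best_acc:
            best_acc = acc
            best_state = copy.deepcopy(model.state_dict())
    last_state = copy.deepcopy(model.state_dict())
    return best_state, last_state
```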
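
For the Experiment Setup row, the quoted λ = 5 enters training through the TRADES trade-off term; a minimal sketch, assuming the standard objective from Zhang et al. (2019), where `x_adv` would come from an inner attack such as the BETA sketch above.

```python
import torch.nn.functional as F

def trades_loss(model, x, x_adv, y, lam=5.0):
    """TRADES objective: clean cross-entropy plus a lambda-weighted KL
    term between the clean and adversarial predictive distributions.
    lam = 5 matches the trade-off parameter quoted in the table."""
    logits_clean = model(x)
    logits_adv = model(x_adv)
    natural = F.cross_entropy(logits_clean, y)
    # KL(p_clean || p_adv), as in the reference TRADES implementation
    robust = F.kl_div(F.log_softmax(logits_adv, dim=1),
                      F.softmax(logits_clean, dim=1),
                      reduction="batchmean")
    return natural + lam * robust
```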