Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Adversarial Training Should Be Cast as a Non-Zero-Sum Game

Authors: Alexander Robey, Fabian Latorre, George J. Pappas, Hamed Hassani, Volkan Cevher

ICLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we evaluate the performance of BETA and BETA-AT on CIFAR-10 (Krizhevsky et al., 2009)."
Researcher Affiliation | Academia | Alexander Robey (University of Pennsylvania, EMAIL); Fabian Latorre (LIONS, EPFL, EMAIL); George J. Pappas (University of Pennsylvania, EMAIL); Hamed Hassani (University of Pennsylvania, EMAIL); Volkan Cevher (LIONS, EPFL, EMAIL)
Pseudocode | Yes | "Algorithm 1: Best Targeted Attack (BETA)"
Open Source Code | No | The paper does not provide any concrete access to source code, such as a specific repository link or an explicit code release statement.
Open Datasets | Yes | "In this section, we evaluate the performance of BETA and BETA-AT on CIFAR-10 (Krizhevsky et al., 2009)."
Dataset Splits | Yes | "We report the performance of two different checkpoints for each algorithm: the best performing checkpoint chosen by early stopping on a held-out validation set, and the performance of the last checkpoint from training."
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions optimizers such as Adam (Kingma & Ba, 2014) or RMSprop but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | "We consider the standard perturbation budget of ϵ = 8/255, and all training and test-time attacks use a step size of α = 2/255. For both TRADES and MART, we set the trade-off parameter λ = 5, which is consistent with the original implementations (Wang et al., 2020; Zhang et al., 2019)."
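To make the reported budget concrete, here is a minimal sketch of a single projected-gradient (PGD-style) step under the paper's stated ℓ∞ settings (ϵ = 8/255, α = 2/255). This is an illustrative assumption, not the paper's BETA attack: BETA optimizes per-class targeted objectives (Algorithm 1), whereas this sketch takes a generic signed-gradient step and projects back into the ϵ-ball and the valid pixel range.

```python
import numpy as np

# Hyperparameters as reported in the paper's experiment setup.
EPSILON = 8 / 255  # l-infinity perturbation budget
ALPHA = 2 / 255    # attack step size

def pgd_step(x_adv, x_clean, grad, epsilon=EPSILON, alpha=ALPHA):
    """One signed-gradient ascent step, projected back into the
    l-infinity ball around x_clean and the valid pixel range [0, 1].

    `grad` stands in for the gradient of some attack objective;
    the paper's BETA attack uses per-class margin objectives instead.
    """
    x_adv = x_adv + alpha * np.sign(grad)
    x_adv = np.clip(x_adv, x_clean - epsilon, x_clean + epsilon)
    return np.clip(x_adv, 0.0, 1.0)

# Toy usage: a 3-pixel "image" and a made-up gradient direction.
x = np.full(3, 0.5)
g = np.array([1.0, -1.0, 0.0])
x_pgd = pgd_step(x.copy(), x, g)
```

After four such steps in the same direction, the perturbation saturates at the ϵ = 8/255 budget, which is why the step size α = 2/255 is paired with a small fixed number of iterations in standard setups.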