Towards Optimal Randomized Strategies in Adversarial Example Game
Authors: Jiahao Xie, Chao Zhang, Weijie Liu, Wensong Bai, Hui Qian
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results also demonstrate the efficiency of FRAT on CIFAR-10 and CIFAR-100 datasets. |
| Researcher Affiliation | Academia | Jiahao Xie (1), Chao Zhang* (2), Weijie Liu (3,1), Wensong Bai (1,2), Hui Qian (1,4); (1) College of Computer Science and Technology, Zhejiang University; (2) Advanced Technology Institute, Zhejiang University; (3) Qiushi Academy for Advanced Studies, Zhejiang University; (4) State Key Lab of CAD&CG, Zhejiang University; emails: xiejh@zju.edu.cn, zczju@zju.edu.cn, westonhunter@zju.edu.cn, wensongb@zju.edu.cn, qianhui@zju.edu.cn |
| Pseudocode | Yes | Algorithm 1: Fully Randomized Adversarial Training. |
| Open Source Code | Yes | Source code: https://github.com/xjiajiahao/fully-randomized-adversarial-training |
| Open Datasets | Yes | Experimental results also demonstrate the efficiency of FRAT on CIFAR-10 and CIFAR-100 datasets. |
| Dataset Splits | No | The paper describes how the training and test sets are generated for the synthetic data, but it does not provide an explicit validation split. For the real datasets (CIFAR-10 and CIFAR-100), it only states: "The detailed setting is deferred to the long version of this paper.", so no explicit split information is given in the main text. |
| Hardware Specification | No | The paper only provides runtime metrics (e.g., "The average runtime per iteration of SAT is 0.72 s on CIFAR-10"), but does not specify any exact hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for the experiments. |
| Software Dependencies | No | The paper mentions software components like "PGD attack" and "Auto PGD" but does not specify their version numbers or any other software dependencies with version details. |
| Experiment Setup | Yes | The regularization parameter in both FRAT and the regularized algorithm in (Meunier et al. 2021) is set to 0.01. ... In the implementation of FRAT, we use the PLA algorithm described previously as the sampling subroutine, and the expectation in (12) is estimated by drawing 100 models from {µ(0), . . . , µ(t)} when t ≤ 100. ... For FRAT, we implement the sampling subroutine with 10 steps of PLA (12), where the noise level γ is set to 0.0001 and the expectation over µ(t) is approximated with the sliding window trick described previously. We set the size of the sliding window to 1. |
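
The Experiment Setup row pins down the sampling hyperparameters (10 PLA steps, noise level γ = 0.0001, a sliding window of size 1 approximating the expectation over µ(t)). To make that setting concrete, below is a minimal sketch of a Langevin-style parameter update with a sliding-window buffer of recent iterates. The function names, step size, loss interface, and the exact noise scaling `sqrt(2 * step_size * gamma)` (one common parameterization of Langevin dynamics) are assumptions for illustration only; the authors' actual implementation is in the linked repository.

```python
import copy
import torch

def pla_sampling_step(model, loss_fn, data, step_size=0.01, gamma=1e-4):
    """Hypothetical Langevin-style update: a gradient step on the loss
    plus Gaussian noise scaled by the noise level gamma (assumed form)."""
    loss = loss_fn(model, data)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            # Noise scale sqrt(2 * eta * gamma) is one standard choice,
            # not necessarily the paper's exact parameterization.
            noise = torch.randn_like(p) * (2.0 * step_size * gamma) ** 0.5
            p.add_(-step_size * g + noise)
    return model

def sample_with_sliding_window(model, loss_fn, data,
                               num_steps=10, window_size=1):
    """Run num_steps PLA updates, retaining only the last window_size
    iterates -- a sliding-window approximation of the expectation
    over mu(t), as described in the quoted setup."""
    window = []
    for _ in range(num_steps):
        model = pla_sampling_step(model, loss_fn, data)
        window.append(copy.deepcopy(model))
        if len(window) > window_size:
            window.pop(0)
    return window

# Hypothetical usage with a toy model and squared-error loss:
net = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
window = sample_with_sliding_window(
    net, lambda m, d: ((m(d[0]) - d[1]) ** 2).mean(), (x, y),
    num_steps=10, window_size=1)
```

With `window_size=1`, the expectation over µ(t) collapses to the single most recent iterate, matching the sliding-window setting quoted in the row above.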