Uniformly Stable Algorithms for Adversarial Training and Beyond
Authors: Jiancong Xiao, Jiawei Zhang, Zhi-Quan Luo, Asuman E. Ozdaglar
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In practical scenarios, we demonstrate the efficacy of ME-A in mitigating the issue of robust overfitting. ... 6. Experiments |
| Researcher Affiliation | Academia | ¹University of Pennsylvania, PA, USA; ²Massachusetts Institute of Technology, MA, USA; ³The Chinese University of Hong Kong, Shenzhen, China. |
| Pseudocode | Yes | Algorithm 1 Moreau Envelope-A |
| Open Source Code | Yes | Code is publicly available at https://github.com/JiancongXiao/Moreau-Envelope-SGD. |
| Open Datasets | Yes | It can be observed in experiments on common datasets such as SVHN, CIFAR-10/100. ... Carmon et al. (2019) |
| Dataset Splits | No | The paper mentions using CIFAR-10, SVHN, and CIFAR-100 datasets and discusses training and test accuracy, but it does not explicitly provide the training/validation/test splits (e.g., percentages or sample counts) or state that standard splits were used. |
| Hardware Specification | No | The paper does not explicitly state any specific hardware details such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers used for replicating the experiments. |
| Experiment Setup | Yes | Weight decay is set to be 5 × 10⁻⁴. Based on Theorem 4.7, the step size α_t of updating u is set to be 1/(ρt), then τ_t = (t − 1)/t. ... For the attack algorithms, we use ϵ = 8/255. The attack step size is set to be ϵ/4. We use piece-wise learning rates, which are equal to 0.1, 0.01, 0.001 for epochs 1 to 100, 101 to 150, and 151 to 200, respectively. (A configuration sketch of this setup follows the table.) |
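
The quoted experiment setup amounts to a standard PGD-based adversarial training configuration. The sketch below is illustrative only and is not the authors' released code (see the repository linked above): the weight decay of 5 × 10⁻⁴, the perturbation budget ϵ = 8/255, the attack step size ϵ/4, and the piecewise learning-rate milestones come from the quoted setup, while the momentum value, the number of PGD steps, the placeholder model, and the commented-out data loading are assumptions added for illustration.

```python
# Minimal PyTorch-style sketch of the reported setup (assumption: PyTorch).
# Values taken from the quote: weight decay 5e-4, eps = 8/255, attack step eps/4,
# LR = 0.1 / 0.01 / 0.001 for epochs 1-100 / 101-150 / 151-200.
# Momentum, PGD step count, the model, and the data loader are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


def piecewise_lr(epoch: int) -> float:
    """Reported piecewise learning-rate schedule over 200 epochs."""
    if epoch <= 100:
        return 0.1
    if epoch <= 150:
        return 0.01
    return 0.001


def pgd_attack(model, x, y, eps=8 / 255, step=(8 / 255) / 4, n_steps=10):
    """L-inf PGD with the reported budget eps = 8/255 and step size eps/4.
    The number of steps (10) is an assumption, not stated in the quote."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(n_steps):
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0.0, 1.0).detach()


model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder network
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4  # momentum assumed
)

for epoch in range(1, 201):
    for group in optimizer.param_groups:
        group["lr"] = piecewise_lr(epoch)  # apply the reported milestones per epoch
    # for x, y in train_loader:            # CIFAR-10/100 or SVHN batches
    #     x_adv = pgd_attack(model, x, y)
    #     optimizer.zero_grad()
    #     F.cross_entropy(model(x_adv), y).backward()
    #     optimizer.step()
```

The ME-A update of the auxiliary variable u (Algorithm 1 in the paper) is not reproduced here, since the quoted material only specifies its step size α_t and the resulting coefficient τ_t, not the full update rule.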