A Unified Wasserstein Distributional Robustness Framework for Adversarial Training
Authors: Anh Tuan Bui, Trung Le, Quan Hung Tran, He Zhao, Dinh Phung
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 EXPERIMENTS We use MNIST (Le Cun et al., 1998), CIFAR10 and CIFAR100 (Krizhevsky et al., 2009) as the benchmark datasets in our experiment. |
| Researcher Affiliation | Collaboration | Monash University, Adobe Research, VinAI Research |
| Pseudocode | Yes | Algorithm 1 The pseudocode of our proposed method. |
| Open Source Code | Yes | Our code is available at https://github.com/tuananhbui89/Unified-Distributional-Robustness |
| Open Datasets | Yes | We use MNIST (Le Cun et al., 1998), CIFAR10 and CIFAR100 (Krizhevsky et al., 2009) as the benchmark datasets in our experiment. |
| Dataset Splits | No | The paper mentions using the full test set and 1000 test samples for different attacks, but does not explicitly provide information about training/validation/test splits or a dedicated validation set. |
| Hardware Specification | No | The paper describes the CNN and ResNet architectures used for experiments but does not provide any specific hardware details such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions optimizers (SGD, Adam) and refers to an external PyTorch implementation for attacks, but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) used for its own experimental setup. |
| Experiment Setup | Yes | For all the AT methods, we use {k = 40, ϵ = 0.3, η = 0.01} for the MNIST dataset, {k = 10, ϵ = 8/255, η = 2/255} for the CIFAR10 dataset, and {k = 10, ϵ = 0.01, η = 0.001} for the CIFAR100 dataset, where k is the number of iterations, ϵ is the distortion bound, and η is the step size of the adversaries. |
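The hyperparameters quoted above (iteration count k, distortion bound ϵ, step size η) parameterize a PGD-style L∞ adversary, the standard inner loop in adversarial training. A minimal NumPy sketch of that update, where the hypothetical `grad_fn` stands in for the gradient of the model's loss with respect to the input:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps, eta, k):
    """Minimal L-infinity PGD sketch (not the paper's exact implementation).

    x       : clean input, assumed to have values in [0, 1]
    grad_fn : returns dLoss/dInput at a given point (assumed, model-specific)
    eps     : distortion bound (L-infinity radius around x)
    eta     : step size per iteration
    k       : number of iterations
    """
    x_adv = x.copy()
    for _ in range(k):
        # Ascend the loss along the sign of its input gradient.
        x_adv = x_adv + eta * np.sign(grad_fn(x_adv))
        # Project back into the eps-ball around x, then into the valid pixel range.
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

With the MNIST setting {k = 40, ϵ = 0.3, η = 0.01}, the total step budget k·η = 0.4 exceeds ϵ, so the projection step is what actually enforces the 0.3 distortion bound.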