Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
Authors: Haichao Zhang, Jianyu Wang
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct analysis on model robustness and demonstrate the effectiveness of the proposed approach through extensive experiments on different datasets compared with state-of-the-art approaches. |
| Researcher Affiliation | Industry | Haichao Zhang (Horizon Robotics), Jianyu Wang (Baidu Research) |
| Pseudocode | Yes | Algorithm 1 Feature Scattering-based Adversarial Training |
| Open Source Code | Yes | Code is available: https://github.com/Haichao-Zhang/FeatureScatter. |
| Open Datasets | Yes | We conduct extensive experiments across several benchmark datasets including CIFAR10 [31], CIFAR100 [31] and SVHN [42]. |
| Dataset Splits | No | The paper details training parameters and evaluation on test sets but does not explicitly mention a 'validation set' or a specific 'validation split' with percentages or sample counts. |
| Hardware Specification | No | The paper states 'Our implementation is based on PyTorch' but does not specify any hardware details such as GPU models, CPU types, or cloud computing resources used for the experiments. |
| Software Dependencies | No | The paper mentions 'Our implementation is based on PyTorch' and refers to algorithms like 'Sinkhorn algorithm [12]' but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | For training, the initial learning rate γ is 0.1 for CIFAR and 0.01 for SVHN. We set the number of epochs for the Standard and Madry methods as 100 with transition epochs as {60, 90}. ... A training schedule of 200 epochs similar to [61] with the same transition epochs is used, as we empirically observed it helps with model performance... The perturbation budget of ϵ = 8 is used in training following the literature [36]. Label smoothing of 0.5, attack iteration T = 1, and the Sinkhorn algorithm [12] with regularization of 0.01 are used. |
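The experiment setup above names the Sinkhorn algorithm with an entropic regularization of 0.01 as the optimal-transport solver used during feature scattering. A minimal sketch of standard Sinkhorn iterations is shown below; it is an illustrative NumPy implementation, not the paper's code, and the function name, iteration count, and marginals are assumptions for demonstration.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.01, n_iters=100):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    a, b   : source/target marginals (each sums to 1)
    C      : cost matrix of shape (len(a), len(b))
    reg    : entropic regularization strength (the paper uses 0.01)
    Returns a transport plan T whose row sums match a and whose
    column sums approximate b.
    """
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)          # scale columns to match b
        u = a / (K @ v)            # scale rows to match a
    return u[:, None] * K * v[None, :]
```

For example, with uniform marginals over two points and a 0/1 cost matrix, the plan concentrates mass on the zero-cost diagonal. In the paper's setting the cost matrix would instead come from distances between clean and perturbed feature vectors.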