Improved Network Robustness with Adversary Critic
Authors: Alexander Matyasko, Lap-Pui Chau
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experiments, we show the effectiveness of our defense. Our method surpasses, in terms of robustness, networks trained with adversarial training. Additionally, we verify in experiments with human annotators on MTurk that adversarial examples are indeed visually confusing. |
| Researcher Affiliation | Academia | Alexander Matyasko, Lap-Pui Chau School of Electrical and Electronic Engineering Nanyang Technological University, Singapore aliaksan001@ntu.edu.sg, elpchau@ntu.edu.sg |
| Pseudocode | Yes | Algorithm 1: High-Confidence Attack A_f |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | We perform experiments on the MNIST dataset. |
| Dataset Splits | No | The paper mentions using a 'validation dataset' but does not provide specific details on the split percentages or sample counts for the training, validation, and test sets. For example: 'We set λ = 0.5 for fully-connected network and λ = 0.1 for Lenet-5 network which we selected using validation dataset.' |
| Hardware Specification | Yes | We thank NVIDIA Corporation for the donation of the GeForce Titan X and GeForce Titan X (Pascal) used in this research. |
| Software Dependencies | No | The paper mentions 'all experiments were conducted using Tensorflow [35]' but does not specify a version number for TensorFlow or any other software libraries or dependencies. |
| Experiment Setup | Yes | We train both networks using Adam optimizer [36] with batch size 100 for 100 epochs. [...] We set λ = 0.5 for fully-connected network and λ = 0.1 for Lenet-5 network which we selected using validation dataset. Both networks are trained with λ_rec = 10^-2 for the adversarial cycle-consistency loss and λ_grad = 10.0 for the gradient norm penalty. The number of iterations for our attack A_f is set to 5. (A hedged configuration sketch follows the table.) |
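
For orientation, the hyperparameters quoted above can be collected into a minimal TensorFlow/Keras training sketch. Only the dataset (MNIST), optimizer (Adam), batch size (100), epoch count (100), and the λ values come from the text quoted in this table; the network architecture, input normalization, and the plain cross-entropy objective shown here are placeholder assumptions, and the adversary-critic terms that the λ weights actually control are not reproduced.

```python
# Hedged sketch of the quoted experiment setup. The model below is a generic
# LeNet-5-style classifier, NOT the paper's architecture, and the training
# objective is plain cross-entropy; the paper's adversary-critic losses are
# only represented by the named lambda constants.
import tensorflow as tf

# Hyperparameters quoted in the reproducibility table.
BATCH_SIZE = 100
EPOCHS = 100
LAMBDA_FC = 0.5        # λ for the fully-connected network
LAMBDA_LENET = 0.1     # λ for the LeNet-5 network
LAMBDA_REC = 1e-2      # λ_rec, adversarial cycle-consistency weight
LAMBDA_GRAD = 10.0     # λ_grad, gradient norm penalty weight
ATTACK_ITERATIONS = 5  # iterations of the high-confidence attack A_f
                       # (listed for completeness; unused in this sketch)

# MNIST as used in the paper; the [0, 1] normalization is an assumption.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# Placeholder LeNet-5-style classifier; the paper's exact layers are not quoted.
classifier = tf.keras.Sequential([
    tf.keras.layers.Conv2D(6, 5, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dense(84, activation="relu"),
    tf.keras.layers.Dense(10),
])

classifier.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
classifier.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS,
               validation_data=(x_test, y_test))
```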