Attacking Binarized Neural Networks

Authors: Angus Galloway, Graham W. Taylor, Medhat Moussa

ICLR 2018

Reproducibility variables, each listing the extracted result and the supporting LLM response:
Research Type: Experimental. LLM response: "To the best of our knowledge, we are the first to formally evaluate and interpret the robustness of binary neural networks (BNNs) to adversarial attacks on the MNIST (LeCun & Cortes, 1998) and CIFAR-10 (Krizhevsky, 2009) datasets. We report test error rates for these models on MNIST (LeCun & Cortes, 1998) with varying capacity in Table 6 of Appendix A. All experiments were conducted in TensorFlow, and used either v2.0.0 of CleverHans (Papernot et al., 2017a), or Foolbox, a Python toolbox for creating adversarial examples (Rauber et al., 2017)."
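Among the attacks this report references is FGSM, which perturbs an input one step along the sign of the loss gradient. A minimal TensorFlow 1.x sketch, assuming inputs in [0, 1] and a softmax cross-entropy loss (our assumptions, not the authors' exact code):

```python
import tensorflow as tf

def fgsm(x, y_onehot, logits, eps=0.1, clip_min=0.0, clip_max=1.0):
    """One-step FGSM: x_adv = clip(x + eps * sign(grad_x loss)).
    eps and the clipping bounds here are illustrative assumptions."""
    loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_onehot,
                                                   logits=logits)
    grad, = tf.gradients(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), clip_min, clip_max)
```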
Researcher Affiliation: Academia. LLM response: "Angus Galloway [1], Graham W. Taylor [1,2,3], Medhat Moussa [1]; [1] School of Engineering, University of Guelph, Canada; [2] Canadian Institute for Advanced Research; [3] Vector Institute for Artificial Intelligence, Canada"
Pseudocode: No. LLM response: The paper describes methods and procedures in narrative text and figures (e.g., Figure 1 for the quantization scheme), but it does not include a clearly labeled "Pseudocode" or "Algorithm" block.
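Although the paper itself provides no pseudocode, the quantization it builds on, binarizing with the sign function while backpropagating through a straight-through estimator, fits in a few lines. A generic sketch of that standard BNN scheme (not the authors' code), using the stop_gradient trick:

```python
import tensorflow as tf

def binarize(x):
    """Forward pass: sign(x) in {-1, +1}; backward pass: identity
    (straight-through estimator). Note tf.sign maps 0 to 0; BNN
    implementations typically send 0 to +1."""
    b = tf.sign(x)
    # Value equals b, but gradients flow to x as if binarize were identity.
    return x + tf.stop_gradient(b - x)
```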
Open Source Code: Yes. LLM response: Source code available at https://github.com/AngusG/cleverhans-attacking-bnns
Open Datasets: Yes. LLM response: "To the best of our knowledge, we are the first to formally evaluate and interpret the robustness of binary neural networks (BNNs) to adversarial attacks on the MNIST (LeCun & Cortes, 1998) and CIFAR-10 (Krizhevsky, 2009) datasets."
Dataset Splits: No. LLM response: The paper frequently mentions using a "test set" (e.g., "Accuracy on adversarial examples generated with a FGSM misclassification attack on the MNIST test set") and discusses training epochs, but it does not explicitly define or specify a distinct validation dataset split or strategy.
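If a validation split were needed to reproduce the work, a common MNIST convention (not specified in the paper) is to hold out the last 5,000 of the 60,000 training images; a minimal sketch:

```python
import tensorflow as tf

# Hypothetical split; the paper does not define a validation set.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_tr, x_val = x_train[:55000], x_train[55000:]
y_tr, y_val = y_train[:55000], y_train[55000:]
```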
Hardware Specification: No. LLM response: The paper mentions running experiments "with TensorFlow (Abadi et al., 2015) on a modern GPU" and acknowledges "hardware support from NVIDIA and Compute Canada", but it does not provide specific details such as GPU model numbers, CPU types, or memory specifications.
Software Dependencies: Yes. LLM response: "All experiments were conducted in TensorFlow, and used either v2.0.0 of CleverHans (Papernot et al., 2017a), or Foolbox, a Python toolbox for creating adversarial examples (Rauber et al., 2017)."
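For reference, attack generation with the CleverHans v2.x API of that era looked roughly like the sketch below; the toy logits_fn, input shape, and attack parameters are illustrative assumptions, not the authors' script:

```python
import tensorflow as tf
from cleverhans.attacks import FastGradientMethod
from cleverhans.model import CallableModelWrapper

def logits_fn(x):
    """Toy stand-in for the (binarized) network under test."""
    return tf.layers.dense(tf.reshape(x, [-1, 784]), 10)

sess = tf.Session()
x = tf.placeholder(tf.float32, [None, 28, 28, 1])  # MNIST-shaped input
model = CallableModelWrapper(logits_fn, 'logits')
attack = FastGradientMethod(model, sess=sess)
adv_x = attack.generate(x, eps=0.1, clip_min=0.0, clip_max=1.0)
```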
Experiment Setup: Yes. LLM response: "Models were trained for 15 epochs unless indicated otherwise. All models were trained with Adam for 15 epochs with a batch size of 128 and a learning rate of 1e-3. For adversarially trained models, we used 20 iterations of PGD (Madry et al., 2017) with ϵ = 0.3 for the last 5 epochs."
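The quoted recipe pins down the optimizer, schedule, and PGD parameters (20 iterations, ϵ = 0.3, last 5 of 15 epochs). A sketch of the inner PGD loop under those settings; the step size alpha and the absence of a random start are our simplifying assumptions:

```python
import tensorflow as tf

def pgd(x, y_onehot, logits_fn, eps=0.3, steps=20, alpha=0.01):
    """L-inf PGD: repeated signed-gradient steps, projected back into the
    eps-ball around x and the valid pixel range. alpha is an assumption;
    the report fixes only eps=0.3 and steps=20."""
    x_adv = x
    for _ in range(steps):
        loss = tf.nn.softmax_cross_entropy_with_logits(
            labels=y_onehot, logits=logits_fn(x_adv))
        grad, = tf.gradients(loss, x_adv)
        x_adv = x_adv + alpha * tf.sign(grad)
        x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)  # eps-ball projection
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)          # valid pixel range
    return tf.stop_gradient(x_adv)

# Per the quoted setup: Adam (learning rate 1e-3), batch size 128,
# 15 epochs total, training on PGD examples for the final 5 epochs only.
```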