Metric Learning for Adversarial Robustness

Authors: Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, Baishakhi Ray

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct an empirical analysis of deep representations under the state-of-the-art attack method called PGD, and find that the attack causes the internal representation to shift closer to the false class. Quantitative experiments show improvement of robustness accuracy by up to 4% and detection efficiency by up to 6% according to Area Under Curve score over prior work."
Researcher Affiliation | Academia | Chengzhi Mao (Columbia University, cm3797@columbia.edu); Ziyuan Zhong (Columbia University, ziyuan.zhong@columbia.edu); Junfeng Yang (Columbia University, junfeng@cs.columbia.edu); Carl Vondrick (Columbia University, vondrick@cs.columbia.edu); Baishakhi Ray (Columbia University, rayb@cs.columbia.edu)
Pseudocode | Yes | "The details of the algorithm are introduced in the appendix."
Open Source Code | Yes | "The code of our work is available at https://github.com/columbia/Metric_Learning_Adversarial_Robustness."
Open Datasets | Yes | "We validate our method on different model architectures across three popular datasets: MNIST, CIFAR-10, and Tiny ImageNet."
Dataset Splits | No | "MNIST consists of a training set of 55,000 images (excluding the 5,000 images reserved for validation, as in [19]) and a test set of 10,000 images. Tiny ImageNet is a smaller version of ImageNet consisting of color images of size 64×64×3 belonging to 200 classes. Each class has 500 training images and 50 validation images." However, for CIFAR-10, no explicit validation split is stated.
Hardware Specification | Yes | "We conduct all of our experiments using TensorFlow v1.13 [1] on a single Tesla V100 GPU with a memory of 16GB."
Software Dependencies | Yes | "We conduct all of our experiments using TensorFlow v1.13 [1] on a single Tesla V100 GPU with a memory of 16GB."
Experiment Setup | Yes | "The details of network architectures and hyper-parameters are summarized in the appendix. All other implementation details are discussed in the appendix."
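The paper's empirical analysis centers on the PGD attack, which iteratively perturbs an input within an L-infinity ball to maximize the classifier's loss. As a minimal illustration (not the authors' implementation; the function name `pgd_attack` and the `grad_fn` interface are assumptions for this sketch), PGD can be written with plain NumPy as:

```python
import numpy as np

def pgd_attack(x, y, grad_fn, eps=0.3, alpha=0.01, steps=40):
    """Projected Gradient Descent (PGD) attack under an L-infinity constraint.

    x       : clean input in [0, 1] (numpy array)
    y       : true label, passed through to grad_fn
    grad_fn : callable returning the gradient of the loss w.r.t. the input
    eps     : maximum L-infinity perturbation
    alpha   : step size per iteration
    steps   : number of gradient steps
    """
    # Random start inside the eps-ball, as in standard PGD.
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        g = grad_fn(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay in valid pixel range
    return x_adv
```

The paper's observation is that representations of such `x_adv` drift toward the false class in the network's embedding space, which motivates the metric-learning defense.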
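The algorithmic details are deferred to the appendix; as a rough illustration of the metric-learning idea (a generic triplet margin loss, not necessarily the authors' exact formulation), one can penalize an adversarial embedding that sits closer to the false class than to its true class:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss on embedding vectors.

    Encourages d(anchor, positive) + margin <= d(anchor, negative), i.e.
    the adversarial embedding (anchor) is pulled toward its true class
    (positive) and pushed away from the attacked-toward false class (negative).
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)
```

The loss is zero once the true-class distance beats the false-class distance by the margin, so only violating triplets contribute gradient.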