LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition

Authors: Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John P Dickerson, Gavin Taylor, Tom Goldstein

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We develop our own adversarial filter that accounts for the entire image processing pipeline and is demonstrably effective against industrial-grade pipelines that include face detection and large scale databases. Additionally, we release an easy-to-use webtool that significantly degrades the accuracy of Amazon Rekognition and the Microsoft Azure Face Recognition API, reducing the accuracy of each to below 1%.
Researcher Affiliation | Collaboration | Valeriia Cherepanova, Department of Mathematics, University of Maryland (vcherepa@umd.edu); Micah Goldblum, Department of Computer Science, University of Maryland (goldblum@umd.edu); Harrison Foley, Department of Computer Science, US Naval Academy (m211926@usna.edu); Shiyuan Duan, Department of Computer Science, University of Maryland (sduan1@umd.edu); John Dickerson, Department of Computer Science, University of Maryland (john@cs.umd.edu); Gavin Taylor, Department of Computer Science, US Naval Academy (taylor@usna.edu); Tom Goldstein, Department of Computer Science, University of Maryland (tomg@cs.umd.edu)
Pseudocode | No | The paper describes the optimization problem mathematically and outlines the iterative solution using signed gradient ascent, but it does not provide structured pseudocode or labeled algorithm blocks. (A hedged reconstruction of the objective appears after this table.)
Open Source Code | Yes | We develop a tool, LowKey, for protecting users from unauthorized surveillance by leveraging methods from the adversarial attack literature, and make it available to the public as a webtool. [...] Our webtool can be found at lowkey.umiacs.umd.edu.
Open Datasets | Yes | Our ensemble of models contains ArcFace and CosFace facial recognition systems (Deng et al., 2019; Wang et al., 2018). For each of these systems, we train ResNet-50, ResNet-152, IR-50, and IR-152 backbones on the MS-Celeb-1M dataset, which contains over five million images from over 85,000 identities (He et al., 2016; Deng et al., 2019; Guo et al., 2016). We primarily test our attacks on the FaceScrub dataset, a standard identification benchmark from the MegaFace challenge, which contains over 100,000 images from 530 known identities as well as one million distractor images (Kemelmacher-Shlizerman et al., 2016). We also perform experiments on the UMDFaces dataset, which can be found in Appendix 8.3 (Bansal et al., 2017).
Dataset Splits | Yes | We treat one tenth of each identity's images as probe images, and we insert the remaining images into the gallery. (A minimal split sketch appears after this table.)
Hardware Specification | Yes | We compare run-time to Fawkes as a baseline and test both attacks on a single NVIDIA GeForce RTX 2080 Ti GPU.
Software Dependencies | No | The paper states: 'For face detection and aligning models as well as for training routines, we use the face.evoLVe.PyTorch GitHub repository (Zhao, 2020).' However, it does not provide specific version numbers for this or any other software component (e.g., PyTorch, CUDA, Python libraries) that would be needed to replicate the experiment environment.
Experiment Setup | Yes | For our adversarial attacks, we use a weight of 0.05 for the perceptual similarity penalty, and σ = 3 with window size 7 for the Gaussian smoothing term. Attacks are computed using signed SGD for 50 epochs with a learning rate of 0.0025. (A code sketch of this setup appears after this table.)
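
Since the paper supplies no pseudocode, the following is a hedged reconstruction of the optimization problem as the table rows describe it: an ensemble feature distortion with a Gaussian-smoothed term, minus an LPIPS perceptual penalty. The notation (f_i for the n ensemble feature extractors, A for detection and alignment, G for Gaussian smoothing, α for the 0.05 penalty weight) is ours and may not match the paper's exact formulation or normalization.

```latex
\max_{x'} \; \frac{1}{2n} \sum_{i=1}^{n}
  \left( \left\| f_i(A(x')) - f_i(A(x)) \right\|_2^2
       + \left\| f_i(A(G(x'))) - f_i(A(x)) \right\|_2^2 \right)
  \;-\; \alpha \cdot \mathrm{LPIPS}(x, x')
```

The iterative solution then takes signed ascent steps, x'_{t+1} = x'_t + η · sign(∇_{x'} L(x'_t)), with η = 0.0025 for 50 steps per the Experiment Setup row.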
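
Below is a minimal PyTorch sketch of that attack loop with the hyperparameters from the Experiment Setup row; it is not the authors' released code. Assumptions: inputs x are pre-aligned face crops in [0, 1] (the detection/alignment step A is omitted), `models` is a list of frozen feature extractors (e.g., ArcFace/CosFace backbones), and `lowkey_attack` is our hypothetical name.

```python
import torch
import lpips                                   # pip install lpips
from torchvision.transforms import GaussianBlur

def lowkey_attack(x, models, n_steps=50, lr=0.0025, alpha=0.05):
    """Signed-SGD ascent on the ensemble feature distortion, penalized by
    LPIPS perceptual similarity (hyperparameters from the table above)."""
    perceptual = lpips.LPIPS(net='alex')       # assumption: AlexNet LPIPS variant
    smooth = GaussianBlur(kernel_size=7, sigma=3.0)
    with torch.no_grad():                      # clean-image target embeddings
        targets = [f(x) for f in models]
    x_adv = x.clone().requires_grad_(True)
    for _ in range(n_steps):
        loss = 0.0
        for f, t in zip(models, targets):
            # Push embeddings of the perturbed image and of its
            # Gaussian-smoothed version away from the clean embedding.
            loss = loss + (f(x_adv) - t).norm() ** 2 \
                        + (f(smooth(x_adv)) - t).norm() ** 2
        loss = loss / (2 * len(models)) \
               - alpha * perceptual(x_adv, x, normalize=True).sum()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv += lr * grad.sign()          # signed gradient ascent step
            x_adv.clamp_(0, 1)                 # keep a valid image
    return x_adv.detach()
```

Usage would be as simple as `protected = lowkey_attack(img_batch, ensemble)`; note the paper attacks the full detection-and-alignment pipeline, which this sketch only approximates by assuming pre-aligned inputs.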
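
For the evaluation protocol in the Dataset Splits row, here is a minimal sketch of the per-identity one-tenth probe/gallery split. The input format `images_by_identity` (a dict mapping identity to a list of image paths) is hypothetical; the paper does not specify one.

```python
import random

def probe_gallery_split(images_by_identity, probe_frac=0.1, seed=0):
    """Per identity, hold out probe_frac of the images as probes and
    insert the rest into the gallery, as the table row describes."""
    rng = random.Random(seed)
    probes, gallery = {}, {}
    for identity, images in images_by_identity.items():
        images = images[:]                     # avoid mutating the input
        rng.shuffle(images)
        k = max(1, int(len(images) * probe_frac))  # at least one probe image
        probes[identity] = images[:k]
        gallery[identity] = images[k:]
    return probes, gallery
```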