A Study of Face Obfuscation in ImageNet
Authors: Kaiyu Yang, Jacqueline H. Yau, Li Fei-Fei, Jia Deng, Olga Russakovsky
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Concretely, we benchmark multiple deep neural networks on obfuscated images and observe that the overall recognition accuracy drops only slightly (≤ 1.0%). Further, we experiment with transfer learning to 4 downstream tasks... Our work demonstrates the feasibility of privacy-aware visual recognition... |
| Researcher Affiliation | Academia | ¹Department of Computer Science, Princeton University; ²Department of Computer Science, Stanford University. |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. The paper describes its methods in prose and with mathematical formulas (e.g., in Appendix B for the face blurring method). |
| Open Source Code | Yes | Data and code are available at https://github.com/princetonvisualai/imagenet-face-obfuscation. |
| Open Datasets | Yes | Taking the popular ImageNet dataset (Deng et al., 2009) as an example... and object recognition on CIFAR-10 (Krizhevsky et al., 2009), scene recognition on SUN (Xiao et al., 2010), object detection on PASCAL VOC (Everingham et al., 2010), and face attribute classification on CelebA (Liu et al., 2015b). |
| Dataset Splits | Yes | We train with SGD for 90 epochs... We verify that validation accuracy drops only slightly (0.1%–0.7% for blurring, 0.3%–1.0% for overlaying) when using face-obfuscated images to train and evaluate. |
| Hardware Specification | Yes | Each experiment takes 1–7 days on machines with 2 CPUs, 16GB memory, and 1–6 Nvidia GTX GPUs. |
| Software Dependencies | No | The paper mentions software like 'PyTorch models (Paszke et al., 2019)' and 'MMDetection (Chen et al., 2019a)' but does not provide specific version numbers for these or other key software components. |
| Experiment Setup | Yes | All models are trained with a batch size of 256, a momentum of 0.9, and a weight decay of 10⁻⁴. We train with SGD for 90 epochs, dropping the learning rate by a factor of 10 every 30 epochs. The initial learning rate is 0.01 for AlexNet, SqueezeNet, and VGG; 0.1 for other models. |
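The training recipe quoted in the Experiment Setup row maps directly onto a standard PyTorch SGD optimizer with a step learning-rate schedule. The sketch below is a minimal, hedged illustration of that configuration; the ResNet-50 architecture and the `train_one_epoch` helper are placeholder assumptions for illustration, not details taken from the paper.

```python
import torch
import torchvision.models as models

# Hyperparameters quoted from the paper's experiment setup.
BATCH_SIZE = 256       # all models
MOMENTUM = 0.9
WEIGHT_DECAY = 1e-4
EPOCHS = 90
LR_STEP = 30           # drop the learning rate by a factor of 10 every 30 epochs
INIT_LR = 0.1          # 0.01 for AlexNet, SqueezeNet, and VGG

# Placeholder architecture: the paper benchmarks several models, not just this one.
model = models.resnet50()

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=INIT_LR,
    momentum=MOMENTUM,
    weight_decay=WEIGHT_DECAY,
)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=LR_STEP, gamma=0.1)

for epoch in range(EPOCHS):
    # train_one_epoch(model, optimizer, ...)  # hypothetical training loop over
    #                                         # face-obfuscated ImageNet batches
    scheduler.step()
```

Reproducing the full benchmark would swap in each evaluated architecture and set `INIT_LR` per model as quoted above.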