Smoothed Geometry for Robust Attribution

Authors: Zifan Wang, Haofan Wang, Shakul Ramkumar, Piotr Mardziel, Matt Fredrikson, Anupam Datta

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on a range of image models demonstrate that both of these mitigations consistently improve attribution robustness, and confirm the role that smooth geometry plays in these attacks on real, large-scale models.
Researcher Affiliation | Academia | Zifan Wang, Electrical and Computer Engineering, Carnegie Mellon University; Haofan Wang, Electrical and Computer Engineering, Carnegie Mellon University; Shakul Ramkumar, Information Networking Institute, Carnegie Mellon University; Matt Fredrikson, School of Computer Science, Carnegie Mellon University; Piotr Mardziel, Electrical and Computer Engineering, Carnegie Mellon University; Anupam Datta, Electrical and Computer Engineering, Carnegie Mellon University
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Proofs for all theorems and propositions in this paper are included in Appendix A and the implementation is available on: https://github.com/zifanw/smoothed_geometry
Open Datasets | Yes | In this section, we evaluate the performance of Attribution Attack on CIFAR-10 [21] and Flower [27] with a ResNet-20 model and on ImageNet [11] with a pre-trained ResNet-50 model.
Dataset Splits | No | The paper mentions evaluating on specific numbers of images (e.g., 500 for CIFAR-10), but does not specify a full train/validation/test split for reproducibility. It primarily discusses evaluation on subsets for robustness testing.
Hardware Specification | Yes | We maintain the same training accuracies and record the per-epoch time with batch size of 32 on one NVIDIA Titan V.
Software Dependencies | No | The paper mentions software such as PyTorch's Captum [20] API and the cleverhans [31] implementation, but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | Setup. We optimize Eq. (1) with maximum allowed ℓ∞ perturbation of 2, 4, 8, 16 for 500 images for CIFAR-10 and Flower and 1000 images for ImageNet. ... SSR: we use the scaling coefficient s = 1e6 and the penalty β = 0.3. ... Madry's: we use the cleverhans [31] implementation with PGD perturbation ‖δ‖₂ = 0.25 in the ℓ₂ space and the number of PGD iterations equal to 30. IG-NORM: We use the authors' released code with default penalty level γ = 0.1. ... with batch size of 32.
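
The Software Dependencies row above names PyTorch's Captum API as the attribution tooling. As a rough illustration of the attribution calls involved (not the paper's evaluation code), a minimal Captum sketch with a stand-in classifier and CIFAR-10-sized inputs might look like this; any trained model such as a ResNet-20 can be substituted:

```python
import torch
import torch.nn as nn
from captum.attr import Saliency, IntegratedGradients, NoiseTunnel

# Stand-in classifier and CIFAR-10-sized inputs; replace with a trained model
# (e.g. ResNet-20) and real images when reproducing the experiments.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
x = torch.rand(4, 3, 32, 32)
label = model(x).argmax(dim=1)

# Plain gradient (saliency) attribution.
sal_attr = Saliency(model).attribute(x, target=label)

# Integrated Gradients attribution.
ig_attr = IntegratedGradients(model).attribute(x, target=label, n_steps=50)

# SmoothGrad: average the saliency map over Gaussian-perturbed copies of the
# input -- the kind of gradient smoothing the paper connects to robustness.
sg_attr = NoiseTunnel(Saliency(model)).attribute(x, nt_type="smoothgrad", target=label)
```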
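The Experiment Setup row describes an ℓ∞-bounded attack optimized over a fixed iteration budget. The sketch below shows a generic PGD-style loop of that shape as one possible reading of the setup; `attribution_fn` and `dissimilarity_fn` are hypothetical placeholders, and the loss is not the paper's exact Eq. (1) objective.

```python
import torch

def pgd_attribution_attack(model, x, label, attribution_fn, dissimilarity_fn,
                           eps=8 / 255, step_size=1 / 255, n_iter=30):
    """Search within an l_inf ball of radius eps for a perturbed input whose
    attribution map differs most from the original one (a generic PGD-style
    sketch, not the paper's exact attack)."""
    x_orig = x.detach()
    base_attr = attribution_fn(model, x_orig, label).detach()
    x_adv = x_orig.clone()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        attr = attribution_fn(model, x_adv, label)  # must be differentiable in x_adv
        loss = dissimilarity_fn(attr, base_attr)    # maximize attribution change
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()              # ascent step
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)   # project to l_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                        # keep valid pixel range
    return x_adv.detach()
```

The sign-of-gradient step followed by projection back into the ε-ball mirrors the standard PGD recipe that the setup references for the Madry baseline (30 PGD iterations).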