ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints

Authors: Yinpeng Dong, Shouwei Ruan, Hang Su, Caixin Kang, Xingxing Wei, Jun Zhu

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments to evaluate the viewpoint robustness of image classifiers on the ImageNet [45] dataset. Our results demonstrate that ViewFool can effectively generate a distribution of adversarial viewpoints against the common image classifiers, which also exhibit high transferability across different models.
Researcher Affiliation | Collaboration | 1 Dept. of Comp. Sci. and Tech., Institute for AI, Tsinghua-Bosch Joint ML Center, THBI Lab, BNRist Center, Tsinghua University, Beijing 100084, China; 2 Institute of Artificial Intelligence, Beihang University, Beijing 100191, China; 3 RealAI; 4 Peng Cheng Laboratory; 5 Pazhou Laboratory (Huangpu), Guangzhou, China
Pseudocode | No | The paper describes its optimization algorithm in Section 3.3 using mathematical equations and textual explanations, but it does not present it as a structured pseudocode or algorithm block.
Open Source Code | Yes | The code to reproduce the experimental results is publicly available at https://github.com/Heathcliff-saku/ViewFool_.
Open Datasets | Yes | We consider visual recognition models on ImageNet [45] in the experiments. ... Moreover, we introduce a new OOD dataset called ImageNet-V to benchmark viewpoint robustness.
Dataset Splits | No | The paper mentions evaluating models on ImageNet and ImageNet-V, but it does not explicitly specify the proportions or sizes of training, validation, or test splits used for its experiments.
Hardware Specification | No | The provided paper text does not contain specific details about the hardware used for experiments, such as exact GPU/CPU models or memory specifications. While the ethics checklist points to Appendix C.1 for this information, Appendix C.1 itself is not included in the provided text.
Software Dependencies | No | The paper mentions tools like "COLMAP [48]" and the "Adam optimizer [28]" but does not specify version numbers for these or any other software dependencies.
Experiment Setup | Yes | In ViewFool, we initialize the camera at [0, 4, 0] as shown in Fig. 2. We set the range of rotation angles as ψ ∈ [−180°, 180°], θ ∈ [−30°, 30°], φ ∈ [20°, 160°], and the range of translation distances as x ∈ [−0.5, 0.5], y ∈ [−1, 1], z ∈ [−0.5, 0.5]. ... We set λ = 0.01 in the experiments ... We approximate the gradients in Eq. (6) with k = 50 MC samples and adopt the Adam optimizer [28] to update the distribution parameters (µ, σ) for 100 iterations.
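The quoted setup (k = 50 Monte Carlo samples, λ = 0.01, Adam over the distribution parameters (µ, σ), 100 iterations) can be sketched as a search loop over a diagonal Gaussian. This is a hedged illustration, not the paper's implementation: the NeRF renderer and classifier are replaced by a toy objective `attack_loss` with a made-up optimum, and a score-function (NES-style) gradient estimator is used in place of whatever exact estimator Eq. (6) defines.

```python
import numpy as np

# Stand-in for rendering a viewpoint and scoring it against a classifier.
# In ViewFool this would be a NeRF render plus the model's loss; here a
# smooth toy objective peaks at a fixed, hypothetical "adversarial" viewpoint.
TARGET = np.array([0.5, -0.2, 0.1])

def attack_loss(viewpoint):
    return -np.sum((viewpoint - TARGET) ** 2)

def search_adversarial_distribution(dim=3, k=50, iters=100, lr=0.01,
                                    lam=0.01, seed=0):
    """Maximize E[loss] + lam * entropy over a diagonal Gaussian N(mu, sigma^2)
    using k-sample score-function gradients and Adam updates (a sketch of the
    quoted hyperparameters, not the paper's exact estimator)."""
    rng = np.random.default_rng(seed)
    mu, log_sigma = np.zeros(dim), np.zeros(dim)
    m, v = np.zeros(2 * dim), np.zeros(2 * dim)   # Adam moment buffers
    b1, b2, adam_eps = 0.9, 0.999, 1e-8
    for t in range(1, iters + 1):
        sigma = np.exp(log_sigma)
        eps = rng.standard_normal((k, dim))
        losses = np.array([attack_loss(mu + sigma * e) for e in eps])
        adv = losses - losses.mean()              # baseline, reduces variance
        # Score-function gradients w.r.t. mu and log(sigma); the +lam term is
        # the entropy bonus (d entropy / d log_sigma = 1 per dimension).
        g_mu = (adv[:, None] * eps).mean(axis=0) / sigma
        g_ls = (adv[:, None] * (eps ** 2 - 1)).mean(axis=0) + lam
        g = np.concatenate([g_mu, g_ls])
        m = b1 * m + (1 - b1) * g                 # Adam, ascent direction
        v = b2 * v + (1 - b2) * g * g
        step = lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + adam_eps)
        mu += step[:dim]
        log_sigma += step[dim:]
    return mu, np.exp(log_sigma)

mu, sigma = search_adversarial_distribution()
```

Optimizing (µ, log σ) rather than (µ, σ) keeps σ positive without a projection step; the entropy term prevents σ from collapsing to a single adversarial viewpoint.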