Targeted Adversarial Perturbations for Monocular Depth Prediction

Authors: Alex Wong, Safa Cicek, Stefano Soatto

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments reveal vulnerabilities in monocular depth prediction networks, and shed light on the biases and context learned by them. To understand the effect of targeted perturbations, we conduct experiments on state-of-the-art monocular depth prediction methods.
Researcher Affiliation | Academia | Alex Wong (UCLA Vision Lab, alexw@cs.ucla.edu), Safa Cicek (UCLA Vision Lab, safacicek@ucla.edu), Stefano Soatto (UCLA Vision Lab, soatto@cs.ucla.edu)
Pseudocode | Yes | Algorithm 1: Proposed method to calculate targeted adversarial perturbations for a regression task. (see the sketch after this table)
Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | The depth models (Monodepth, Monodepth2) are trained on the KITTI dataset [12] using the Eigen split [6]. We evaluate adversarial targeted attacks on the KITTI semantic split [1].
Dataset Splits | Yes | The depth models (Monodepth, Monodepth2) are trained on the KITTI dataset [12] using the Eigen split [6].
Hardware Specification | Yes | The entire optimization for each frame takes 12 s (Monodepth2 takes 22 ms for each forward pass, so 500 steps take roughly 500 × 22 ms ≈ 11 s in total) using a GeForce GTX 1080.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1').
Experiment Setup | Yes | Images are resized to 640 × 192 as a preprocessing step and perturbations are computed with 500 steps of SGD. For all the experiments, ξ ∈ {2×10⁻³, 5×10⁻³, 1×10⁻², 2×10⁻²}.
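
The Pseudocode and Experiment Setup rows describe an SGD-based optimization of a bounded perturbation toward a target depth map. The snippet below is a minimal PyTorch-style sketch of that kind of procedure, not the paper's Algorithm 1 verbatim: the L1 objective, the L∞ projection of radius ξ, the [0, 1] image range, the learning rate, and the names `model`, `image`, and `target_depth` (a model assumed to map an image batch directly to a depth or disparity map) are all assumptions for illustration.

```python
import torch

def targeted_depth_perturbation(model, image, target_depth,
                                xi=2e-3, steps=500, lr=1e-3):
    """Optimize a bounded perturbation v so that model(image + v) approaches
    target_depth (illustrative sketch, not the paper's exact Algorithm 1)."""
    model.eval()
    # Perturbation starts at zero and has the same shape as the input image.
    v = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.SGD([v], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        pred = model(image + v)                       # perturbed depth prediction
        loss = torch.abs(pred - target_depth).mean()  # L1 distance to the target (assumed objective)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            v.clamp_(-xi, xi)                             # keep ||v||_inf <= xi
            v.copy_((image + v).clamp(0.0, 1.0) - image)  # keep image + v a valid image in [0, 1]
    return v.detach()
```

With the reported settings, the input would be resized to 640 × 192, ξ chosen from {2×10⁻³, 5×10⁻³, 1×10⁻², 2×10⁻²}, and the loop run for 500 SGD steps, which matches the roughly 500 × 22 ms per-frame cost quoted in the Hardware Specification row.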