Adversarial Training of Self-supervised Monocular Depth Estimation against Physical-World Attacks

Authors: Zhiyuan Cheng, James Chenhao Liang, Guanhong Tao, Dongfang Liu, Xiangyu Zhang

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we evaluate the performance of our method in white-box, black-box, and physical-world attack scenarios, and discuss the ablations. Our code is available at https://github.com/Bob-cheng/DepthModelHardening."
Researcher Affiliation | Academia | Zhiyuan Cheng (Purdue University, cheng443@purdue.edu); James Liang (Rochester Institute of Technology, jcl3689@rit.edu); Guanhong Tao (Purdue University, taog@purdue.edu); Dongfang Liu (Rochester Institute of Technology, dongfang.liu@rit.edu); Xiangyu Zhang (Purdue University, xyzhang@cs.purdue.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Our code is available at https://github.com/Bob-cheng/DepthModelHardening."
Open Datasets | Yes | "Both models are trained on the KITTI dataset (Geiger et al., 2013) and our methods fine-tune the original models publicly available."
Dataset Splits | Yes | "We evaluate the depth estimation performance on the KITTI dataset using the Eigen split and report the results in Table 1."
Hardware Specification | Yes | "We train our model with one GPU (Nvidia RTX A6000) that has a memory of 48 GB, and the CPU is an Intel Xeon Silver 4214R."
Software Dependencies | No | The paper mentions "Adam as the optimizer" but does not specify version numbers for any software dependencies.
Experiment Setup | Yes | "In adversarial training, the ranges of distance zc and viewing angle α are sampled randomly from 5 to 10 meters and -30 to 30 degrees, respectively... We generate the adversarial perturbations with two methods: L0-norm-bounded with ϵ = 1/10 and L∞-norm-bounded (i.e., PGD (Madry et al., 2018)) with ϵ = 0.1... We fine-tune the original model for 3 epochs on the KITTI dataset... In perturbation generation, we use 10 steps and a step size of 2.5·ϵ/10... and a batch size of 12. In MDE training, the batch size is 32, and the learning rate is 1e-5. We use Adam as the optimizer..."
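The L∞ perturbation-generation settings quoted above (PGD with ϵ = 0.1, 10 steps, step size 2.5·ϵ/10) can be sketched as follows. This is a minimal illustration, not the authors' code: it uses NumPy and a toy quadratic loss with an analytic gradient in place of the paper's MDE adversarial loss, and the function names are assumptions.

```python
import numpy as np

EPS = 0.1                   # L-inf bound epsilon from the quoted setup
STEPS = 10                  # number of PGD steps
ALPHA = 2.5 * EPS / STEPS   # step size 2.5*eps/10

def toy_loss_grad(x):
    # Stand-in for the gradient of the MDE adversarial loss.
    # For the toy loss 0.5*||x||^2, the gradient is x itself.
    return x

def pgd_linf(x, loss_grad, eps=EPS, alpha=ALPHA, steps=STEPS):
    """Projected gradient ascent within an L-inf ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = loss_grad(x_adv)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the L-inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

x = np.full((4, 4), 0.5)      # toy "image" in [0, 1]
x_adv = pgd_linf(x, toy_loss_grad)
print(np.max(np.abs(x_adv - x)))  # perturbation magnitude, bounded by eps
```

In the paper's actual pipeline the gradient would come from backpropagating the adversarial objective through the depth network; the projection and step-size schedule shown here follow the standard PGD formulation of Madry et al. (2018).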