Robust Perception through Equivariance
Authors: Chengzhi Mao, Lingyu Zhang, Abhishek Vaibhav Joshi, Junfeng Yang, Hao Wang, Carl Vondrick
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical experiments show that restoring feature equivariance at inference time defends against worst-case adversarial perturbations. The method obtains improved adversarial robustness on four datasets (ImageNet, Cityscapes, PASCAL VOC, and MS-COCO) on image recognition, semantic segmentation, and instance segmentation tasks. |
| Researcher Affiliation | Academia | 1Department of Computer Science, Columbia University, New York, USA 2Department of Computer Science, Rutgers University, New Jersey, USA. |
| Pseudocode | Yes | Algorithm 1 Equivariance Defense |
| Open Source Code | Yes | Our code is available at https://github.com/cvlab-columbia/Equi4Rob. |
| Open Datasets | Yes | Our experiments evaluate the adversarial robustness on four datasets: ImageNet (Deng et al., 2009), Cityscapes (Cordts et al., 2016), PASCAL-VOC (Everingham et al., 2010), and MS-COCO (Lin et al., 2014). |
| Dataset Splits | No | The paper mentions 'randomly sample 2% of data for evaluation' for ImageNet, but does not provide specific train/validation/test dataset splits (percentages or counts) for any of the datasets. |
| Hardware Specification | Yes | We evaluate on a single A6000 GPU. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | We choose the number of transformations to be K = 8, which empirically can be fit into a 2080Ti GPU with batch size 1. ... We use steps T = 20 for all our defense tasks. |
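The core idea quoted above (Algorithm 1, "Equivariance Defense") is to adjust the input at inference time for T steps so that the model's features commute with K spatial transformations. The sketch below is a heavily simplified toy illustration, not the paper's implementation: it uses a single hand-made, deliberately non-equivariant feature map, K = 1 transformation (a horizontal flip), and finite-difference gradients instead of backpropagation, purely to show the structure of the defense loop.

```python
import numpy as np

# Hypothetical toy setup (NOT the paper's model): a deliberately
# non-equivariant "feature extractor" and one transformation, used
# only to illustrate the inference-time equivariance-restoration loop.

def features(x):
    """Toy feature map mixing each pixel with its right neighbour.
    It is NOT equivariant to horizontal flips, so the loss is nonzero."""
    return x + 0.5 * np.roll(x, 1, axis=1)

def transform(x):
    """One of the K spatial transformations (here K = 1: horizontal flip)."""
    return np.fliplr(x)

def equivariance_loss(x):
    """|| f(t(x)) - t(f(x)) ||^2 -- zero iff f commutes with t."""
    return float(np.sum((features(transform(x)) - transform(features(x))) ** 2))

def restore_equivariance(x, steps=20, lr=0.1, eps=1e-4):
    """Defense sketch: for T steps, nudge the input to minimise the
    equivariance loss (finite-difference gradients for simplicity)."""
    x = x.copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for idx in np.ndindex(x.shape):
            x[idx] += eps
            up = equivariance_loss(x)
            x[idx] -= 2 * eps
            down = equivariance_loss(x)
            x[idx] += eps
            grad[idx] = (up - down) / (2 * eps)
        x -= lr * grad
    return x

rng = np.random.default_rng(0)
x_adv = rng.normal(size=(8, 8))        # stand-in for an attacked input
x_fixed = restore_equivariance(x_adv)  # defended input; its loss is lower
print(equivariance_loss(x_adv), equivariance_loss(x_fixed))
```

In the paper's setting the feature extractor is the attacked network itself, the K = 8 transformations and T = 20 steps quoted in the table replace this toy choice, and the gradient is obtained by backpropagation rather than finite differences.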