Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
Authors: Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically evaluate our method on CIFAR10 and ImageNet. For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image. |
| Researcher Affiliation | Academia | Jinyuan Jia (Duke University, jinyuan.jia@duke.edu); Binghui Wang (Illinois Institute of Technology, bwang70@iit.edu); Xiaoyu Cao (Duke University, xiaoyu.cao@duke.edu); Hongbin Liu (Duke University, hongbin.liu@duke.edu); Neil Zhenqiang Gong (Duke University, neil.gong@duke.edu) |
| Pseudocode | No | The paper describes the method and theoretical derivations, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: 'We use the publicly available implementation of randomized ablation to train our models.' and provides a footnote link to 'https://github.com/alevine0/randomizedAblation/'. This link is for a third-party implementation that the authors used, not for the specific methodology or experimental code developed in this paper (e.g., for top-k predictions and tightness analysis). |
| Open Datasets | Yes | We use CIFAR10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009) for evaluation. |
| Dataset Splits | No | The paper states: 'Moreover, as in Lee et al. (2019), we use 500 testing examples for both CIFAR10 and ImageNet.' This specifies a subset for testing, but it does not provide details on the training or validation splits (e.g., percentages, specific sample counts for each split, or a reference to a predefined split methodology). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models (e.g., 'NVIDIA A100'), CPU models (e.g., 'Intel Core i7'), or cloud computing instance types with their specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'ResNet-110 and ResNet-50 as the base classifiers' and refers to a 'publicly available implementation of randomized ablation' but does not specify any software dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8). |
| Experiment Setup | Yes | Parameter setting: Unless otherwise mentioned, we adopt the following default parameters. We set e = 50 and e = 1,000 for CIFAR10 and ImageNet, respectively. We set k = 3, n = 100,000, and α = 0.001. |
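
The quoted default parameters lend themselves to a small configuration sketch. Below is a minimal, hypothetical Python sketch, not the authors' released code, that records those defaults and runs a Monte Carlo ablation loop in the spirit of randomized ablation. It assumes `e` denotes the number of retained pixels per ablated image; `CONFIG`, `ablate`, and `label_counts` are illustrative names, not identifiers from any released implementation.

```python
# Minimal sketch of the paper's quoted default parameters and a Monte Carlo
# ablation loop in the spirit of randomized ablation. All names are
# illustrative; `e` is assumed to be the number of retained pixels.

import numpy as np

# Default parameters quoted in the paper.
CONFIG = {
    "e_cifar10": 50,     # retained pixels per ablated CIFAR10 image (assumed meaning of e)
    "e_imagenet": 1000,  # retained pixels per ablated ImageNet image
    "k": 3,              # certify the top-3 predictions
    "n": 100_000,        # Monte Carlo ablation samples per test image
    "alpha": 0.001,      # confidence level for the probability bounds
}

def ablate(image: np.ndarray, e: int, rng: np.random.Generator) -> np.ndarray:
    """Keep e uniformly random pixels and zero out the rest (illustrative ablation)."""
    h, w, _ = image.shape
    keep = rng.choice(h * w, size=e, replace=False)
    mask = np.zeros(h * w, dtype=bool)
    mask[keep] = True
    # The paper encodes ablated pixels with a special value; zeroing is a simplification.
    return image * mask.reshape(h, w, 1)

def label_counts(base_classifier, image: np.ndarray, e: int, n: int,
                 num_classes: int, seed: int = 0) -> np.ndarray:
    """Count how often each label is predicted over n random ablations."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(num_classes, dtype=np.int64)
    for _ in range(n):
        counts[base_classifier(ablate(image, e, rng))] += 1
    # These counts feed the (1 - alpha)-confidence probability bounds used to
    # certify that the true label remains in the top-k under bounded L0 perturbations.
    return counts
```

As a usage note, `base_classifier` would be a callable wrapping the ResNet base model and returning a class index for an ablated image; in practice the n ablations are evaluated in batches rather than one at a time.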