CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection
Authors: Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Tan, Masashi Sugiyama
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark datasets including CIFAR10 and SVHN clearly verify the hypothesis and CIFS's effectiveness of robustifying CNNs. |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, National University of Singapore, Singapore; 2RIKEN Center for Advanced Intelligence Project (AIP), Tokyo, Japan; 3Department of Mathematics, National University of Singapore, Singapore; 4Graduate School of Frontier Sciences, The University of Tokyo, Tokyo, Japan. |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | We utilize the CIFS to modify CNNs in different architectures to perform classification tasks on benchmark datasets, namely a ResNet-18 and a WideResNet-28-10 on the CIFAR10 (Krizhevsky, 2009) dataset, a ResNet-18 on the SVHN (Netzer et al., 2011) dataset and a ResNet-10 on the Fashion-MNIST (Xiao et al., 2017) dataset. |
| Dataset Splits | No | The paper mentions using benchmark datasets like CIFAR10 and SVHN, which typically have predefined splits, but it does not explicitly state the percentages or counts for training, validation, and test splits within the paper text itself. |
| Hardware Specification | No | The paper discusses training and evaluation times but does not specify any hardware details such as GPU/CPU models or types of computing resources used for experiments. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers for replication, such as programming languages, libraries, or frameworks used. |
| Experiment Setup | Yes | We adversarially train ResNet-18 and WRN-28-10 models with PGD-10 (ϵ = 8/255) adversarial data. [...] Channels' relevances are assessed based on top-2 results and we use the softmax function with T = 1 as the IMGF. [...] We train CNN classifiers in an adversarial manner for 120 epochs and adjust the learning rate with a multiplier 0.1 at epoch 75 and epoch 90. |
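The two quantitative details quoted above (the temperature-softmax importance function and the stepped learning-rate schedule) can be sketched in plain Python. This is a minimal illustration of the quoted hyperparameters, not the authors' implementation: how channel relevance scores are computed from top-2 predictions is not specified in the table, and the base learning rate of 0.1 is an assumed value typical for adversarial training, not stated in the excerpt.

```python
import math

def softmax_with_temperature(scores, T=1.0):
    """Temperature-scaled softmax; the paper reports using T = 1 as the IMGF
    (importance mask generating function) over channel relevance scores."""
    # Subtract the max for numerical stability before exponentiating.
    m = max(scores)
    exps = [math.exp((s - m) / T) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def channel_importance_mask(channel_relevances, T=1.0):
    # Hypothetical helper: converts per-channel relevance scores (assumed to
    # come from the paper's top-2 assessment, which is not detailed here)
    # into a soft importance mask via the temperature softmax.
    return softmax_with_temperature(channel_relevances, T)

def learning_rate(epoch, base_lr=0.1):
    # Schedule quoted in the table: train for 120 epochs and multiply the
    # learning rate by 0.1 at epoch 75 and again at epoch 90.
    # base_lr = 0.1 is an assumption, not stated in the excerpt.
    lr = base_lr
    if epoch >= 75:
        lr *= 0.1
    if epoch >= 90:
        lr *= 0.1
    return lr
```

For example, `learning_rate(0)` returns the base rate, while epochs 75-89 use one tenth of it and epochs 90-119 use one hundredth, matching the two decay points quoted from the paper.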