Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Adversarial Robustness via Deformable Convolution with Stochasticity
Authors: Yanxiang Ma, Zixuan Huang, Minjing Dong, Shan You, Chang Xu
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our method achieves SOTA adversarial robustness and clean accuracy compared with other random defense methods. Code is available here. ... We evaluate DCS on CIFAR dataset (Krizhevsky, 2012) and ImageNet dataset (Krizhevsky et al., 2017). Various convolution-based networks are used for baseline networks, including ResNet18 (Park et al., 2022), ResNet50 (He et al., 2016) and WideResNet34 (Zagoruyko & Komodakis, 2016). In our experiments, unless specifically labeled, DCS replaces the second convolutional layer. All other layers keep the original settings. ... Benchmarks We evaluate DCS under SOTA attacks in TorchAttacks (Kim, 2020). |
| Researcher Affiliation | Collaboration | 1School of Computer Science, University of Sydney, NSW, Australia 2International School, Beijing University of Posts and Telecommunications, Beijing, China 3School of Computer Science, City University of Hong Kong, Hong Kong, China 4SenseTime Research, Beijing, China. Correspondence to: Chang Xu <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Gradient-Selective Adversarial Training ... Algorithm 2 Adversarial Defense under White-Box Attack |
| Open Source Code | Yes | Extensive experiments show that our method achieves SOTA adversarial robustness and clean accuracy compared with other random defense methods. Code is available here. |
| Open Datasets | Yes | We evaluate DCS on CIFAR dataset (Krizhevsky, 2012) and ImageNet dataset (Krizhevsky et al., 2017). ... The CIFAR 10 and CIFAR 100 datasets contain 10 and 100 classes, respectively. ... The ImageNet dataset consists of 1.2 x 10^6 training examples and 5.0 x 10^4 test examples, classified into 1000 distinct classes. |
| Dataset Splits | Yes | The CIFAR 10 and CIFAR 100 datasets contain 10 and 100 classes, respectively. Each dataset consists of 5.0 x 10^4 training samples and 1.0 x 10^4 test samples, with all images resized to 32x32 pixels with three color channels. ... The ImageNet dataset consists of 1.2 x 10^6 training examples and 5.0 x 10^4 test examples, classified into 1000 distinct classes. |
| Hardware Specification | Yes | The model is implemented using PyTorch (Paszke et al., 2019) and trained on an NVIDIA GeForce RTX 4090 GPU. ... The entire model is developed using PyTorch (Paszke et al., 2019) and evaluated using four NVIDIA GeForce RTX 4090 GPUs. |
| Software Dependencies | No | The model is implemented using PyTorch (Paszke et al., 2019) ... We evaluate DCS under SOTA attacks in TorchAttacks (Kim, 2020). ... The entire model is developed using PyTorch (Paszke et al., 2019). |
| Experiment Setup | Yes | In our experiments, we split the dataset into batches of 128, setting the weight decay at 5.0 x 10^-4. We employed an SGD optimizer with a momentum of 0.9. The initial learning rate was set at 0.1 and was decreased according to a multi-step schedule. The model was trained over 200 epochs, with learning rate reductions by a factor of 10 at epochs 60 and 120. For adversarial training, we configured ε at 8/255 and the step length η at 2/255 for a 7-step PGD (Madry et al., 2017). ... In our experiments, we divide the dataset into batches of 512 examples. The weight decay is 1.0 x 10^-4 with the initial learning rate set at 0.1 and managed using a cosine annealing scheduler. The model is fine-tuned using adversarial training for 90 epochs, with 10-step PGD (Madry et al., 2017) generating adversarial examples, setting ε to 4/255 and the step length η to 4/255. |
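The quoted CIFAR setup (multi-step learning-rate schedule with ×10 drops at epochs 60 and 120; 7-step PGD with ε = 8/255 and step length η = 2/255) can be sketched in plain Python. This is an illustrative sketch only: the function names below are ours, not the paper's, and the PGD loop is reduced to a single scalar "pixel" to show the step-and-project logic.

```python
def multistep_lr(epoch, base_lr=0.1, milestones=(60, 120), gamma=0.1):
    """Learning rate under the quoted multi-step schedule: start at 0.1
    and divide by 10 at each milestone epoch (60 and 120)."""
    return base_lr * gamma ** sum(epoch >= m for m in milestones)

def pgd_attack_1d(x0, grad_fn, eps=8 / 255, eta=2 / 255, steps=7):
    """Toy one-pixel sketch of L-infinity PGD (Madry et al., 2017):
    take `steps` sign-gradient ascent steps of size `eta`, projecting
    back into the eps-ball around x0 and the valid pixel range [0, 1]."""
    x = x0
    for _ in range(steps):
        g = grad_fn(x)
        x += eta * (1 if g > 0 else -1 if g < 0 else 0)
        x = min(max(x, x0 - eps), x0 + eps)  # project onto the eps-ball
        x = min(max(x, 0.0), 1.0)            # clip to valid pixel values
    return x

# Example: maximize a toy loss (x - 0.9)**2 from a clean pixel x0 = 0.5;
# with eta = 2/255, the perturbation saturates at the eps boundary (8/255)
# after four of the seven steps.
adv = pgd_attack_1d(0.5, lambda x: 2 * (x - 0.9))
```

Note that 7 steps of size 2/255 can traverse up to 14/255, so the quoted step budget comfortably covers the 8/255 ε-ball; the ImageNet setting (10-step PGD with ε = η = 4/255) is even more generous relative to its ball.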