IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks

Authors: Yue Cao, Tianlin Li, Xiaofeng Cao, Ivor Tsang, Yang Liu, Qing Guo

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our method significantly enhances the adversarial robustness of diverse deep models against various attacks while maintaining high accuracy on clean images.
Researcher Affiliation | Academia | (1) CFAR and IHPC, Agency for Science, Technology and Research (A*STAR), Singapore; (2) School of Computer Science and Engineering, Nanyang Technological University, Singapore; (3) Jilin University, China
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper.
Open Source Code | Yes | We released our code in https://github.com/tsingqguo/irad.
Open Datasets | Yes | We use three datasets: CIFAR10 (Krizhevsky et al., a), CIFAR100 (Krizhevsky et al., b) and ImageNet (Deng et al., 2009). (A hedged data-loading sketch follows the table.)
Dataset Splits | No | The paper mentions using training and testing datasets, for example, 'We train the WideResNet-28-10 on the CIFAR10 dataset and calculate the clean testing dataset's accuracy'. However, it does not provide explicit percentages or counts for training/validation/test splits, nor does it mention a validation split.
Hardware Specification | Yes | These experiments were conducted using the AMD EPYC 7763 64-Core Processor with 1 NVIDIA A100 GPU.
Software Dependencies | No | The paper mentions using the Adam optimizer and L1/cross-entropy loss functions, but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | The PGD attack uses an ϵ value of 8/255 and 100 steps, with a step size of 2/255. For implicit representation training, we use Adam as the optimizer with a learning rate of 1e-4 and betas of (0, 0.9) as parameters... training is conducted with a batch size of 128. For SampleNet training... learning rate of 2e-4 for CIFAR10 and ImageNet, 1e-3 for CIFAR100, and betas set to (0, 0.9). The training of SampleNet is conducted with a batch size of 400 for CIFAR10, 200 for CIFAR100, and 8 for ImageNet. (A configuration sketch follows the table.)
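
The three benchmark datasets named in the Open Datasets row are available through torchvision. The paper does not describe its exact data pipeline, so the sketch below is only an illustrative assumption: the root paths, transforms, and use of the validation split for ImageNet are placeholders, not details taken from the paper.

```python
# Hedged sketch: obtaining the three evaluation datasets with torchvision.
# Paths and transforms are illustrative assumptions, not the authors' pipeline.
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()

cifar10_test = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform)
cifar100_test = torchvision.datasets.CIFAR100(
    root="./data", train=False, download=True, transform=transform)

# ImageNet cannot be auto-downloaded; "./data/imagenet" is a placeholder path.
imagenet_val = torchvision.datasets.ImageNet(
    root="./data/imagenet", split="val",
    transform=T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()]))
```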
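
The hyperparameters listed in the Experiment Setup row translate into a short PyTorch configuration. The sketch below is a minimal illustration under stated assumptions, not the authors' released implementation (see https://github.com/tsingqguo/irad): `DefendedModel` and `SampleNet` are hypothetical placeholder modules, and `torchattacks` is just one possible way to run the reported PGD attack.

```python
# Hedged sketch of the attack and optimizer configuration reported above.
# DefendedModel and SampleNet are placeholders, not the paper's architectures.
import torch
import torch.nn as nn
import torchattacks  # third-party library; one possible PGD implementation


class DefendedModel(nn.Module):
    """Placeholder classifier for 32x32 RGB inputs (e.g., CIFAR10)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))

    def forward(self, x):
        return self.net(x)


class SampleNet(nn.Module):
    """Placeholder resampling network; real SampleNet is in the released code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 2, kernel_size=3, padding=1)

    def forward(self, x):
        return self.net(x)


model = DefendedModel()
sample_net = SampleNet()

# PGD attack: eps = 8/255, 100 steps, step size 2/255 (values from the paper).
attack = torchattacks.PGD(model, eps=8 / 255, alpha=2 / 255, steps=100)

# Implicit-representation training: Adam, lr 1e-4, betas (0, 0.9), batch size 128.
ir_optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0, 0.9))

# SampleNet training: Adam, lr 2e-4 (CIFAR10/ImageNet) or 1e-3 (CIFAR100),
# betas (0, 0.9); batch sizes 400 / 200 / 8 respectively.
sn_optimizer = torch.optim.Adam(sample_net.parameters(), lr=2e-4, betas=(0, 0.9))
```

A typical robustness evaluation would then call `attack(images, labels)` to craft adversarial examples and measure accuracy on both clean and attacked test images, as described in the paper's experiments.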