Rethinking CNN’s Generalization to Backdoor Attack from Frequency Domain
Authors: Quanrui Rao, Lin Wang, Wuying Liu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted experiments on three widely used datasets: CIFAR-10 (Krizhevsky et al. (2009)), CelebA (Liu et al. (2015)) and MNIST (LeCun et al. (1998)). |
| Researcher Affiliation | Academia | 1 Shandong Key Laboratory of Language Resources Development and Application, Ludong University, China 2 School of Information and Electrical Engineering, Ludong University, China 3 Xianda College of Economics and Humanities, Shanghai International Studies University, China |
| Pseudocode | No | The paper describes its proposed methods and algorithms in textual form and through equations, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology, nor does it include links to a code repository. |
| Open Datasets | Yes | We conducted experiments on three widely used datasets: CIFAR-10 (Krizhevsky et al. (2009)), CelebA (Liu et al. (2015)) and MNIST (LeCun et al. (1998)). |
| Dataset Splits | No | The paper mentions conducting "validation using the ResNet18...models" but does not provide specific details on the dataset splits for training, validation, or testing for CIFAR-10, CelebA, or MNIST. While it provides train/test splits for Tiny-ImageNet, it does not explicitly mention a validation split for that dataset either. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU models, CPU types, memory) used to conduct the experiments. |
| Software Dependencies | No | The paper mentions using the "adam optimizer" and various models like "ResNet18", but it does not provide specific version numbers for any software libraries, frameworks, or programming languages used (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | During the training phase, we used the adam optimizer, initially using a learning rate of 0.01 and decreasing it by a factor of 10 every 100 training steps. (See the sketch after this table.) |
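
The reported configuration (Adam optimizer, initial learning rate 0.01, decayed by a factor of 10 every 100 training steps) translates into a short training loop. The sketch below is a minimal PyTorch illustration: the ResNet18 model, CIFAR-10 loader, batch size, and cross-entropy loss are assumptions added for context, and it interprets "training steps" as optimizer updates, which the paper does not make explicit.

```python
# Minimal sketch of the reported optimizer/schedule, assuming PyTorch + torchvision.
# Only the Adam optimizer, the 0.01 initial learning rate, and the 10x decay every
# 100 steps come from the paper; everything else here is an illustrative assumption.
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR
from torchvision import datasets, models, transforms

train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = models.resnet18(num_classes=10)                  # assumed; the paper validates with ResNet18
optimizer = Adam(model.parameters(), lr=0.01)            # reported optimizer and initial learning rate
scheduler = StepLR(optimizer, step_size=100, gamma=0.1)  # learning rate divided by 10 every 100 steps
criterion = nn.CrossEntropyLoss()                        # assumed classification loss

model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()  # stepping per batch; "training steps" could also mean epochs
```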