DISCO: Adversarial Defense with Local Implicit Functions

Authors: Chih-Hui Ho, Nuno Vasconcelos

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that both DISCO and its cascade version outperform prior defenses, regardless of whether the defense is known to the attacker. DISCO is also shown to be data and parameter efficient and to mount defenses that transfer across datasets, classifiers and attacks.
Researcher Affiliation | Academia | Chih-Hui Ho, Nuno Vasconcelos, Department of Electrical and Computer Engineering, University of California, San Diego, {chh279, nvasconcelos}@ucsd.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; it describes the architecture and training procedures in text and diagrams.
Open Source Code | Yes | Code is available at https://github.com/chihhuiho/disco.git
Open Datasets | Yes | Three datasets are considered: Cifar10 [58], Cifar100 [59] and Imagenet [23].
Dataset Splits | No | The paper mentions "training pairs" and evaluation on the test set, but it does not explicitly describe a separate validation split or how one was used.
Hardware Specification | Yes | All experiments are conducted on a single Nvidia Titan Xp GPU with an Intel Xeon CPU E5-2630 using Pytorch [85].
Software Dependencies | No | The paper mentions Pytorch [85] but does not specify a version number for it or for any other software dependency, which limits reproducibility.
Experiment Setup | Yes | The network is trained to minimize the L1 loss between the predicted RGB values of the defense output x_def and those of the clean image x_cln. By default, the kernel size s is set to 3, and random patches of size 48x48 are sampled from training pairs. (A sketch of this setup follows the table.)
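To make the quoted setup concrete, below is a minimal PyTorch sketch of the reconstruction objective: random 48x48 patches are cropped from (adversarial, clean) training pairs and a network is trained with an L1 loss to predict the clean RGB values. The `disco_net` stand-in (a small kernel-size-3 convolutional block), the Adam optimizer, the learning rate, and the random-tensor inputs are illustrative assumptions, not the authors' implementation, which is available at the GitHub link above.

# Hedged sketch of the training setup described in the table, not the authors' code.
# Assumptions (not stated in the report): `disco_net` as a stand-in for the
# local-implicit-function defense, Adam as the optimizer, and (adversarial, clean)
# image pairs as inputs.
import torch
import torch.nn.functional as F

PATCH = 48  # random patch size quoted in the report


def sample_patch_pair(x_adv, x_cln, patch=PATCH):
    """Crop the same random patch from an adversarial/clean pair of shape (B, C, H, W)."""
    _, _, h, w = x_adv.shape
    top = torch.randint(0, h - patch + 1, (1,)).item()
    left = torch.randint(0, w - patch + 1, (1,)).item()
    return (x_adv[..., top:top + patch, left:left + patch],
            x_cln[..., top:top + patch, left:left + patch])


def train_step(disco_net, optimizer, x_adv, x_cln):
    """One L1-reconstruction step: predict clean RGB values from the attacked patch."""
    adv_patch, cln_patch = sample_patch_pair(x_adv, x_cln)
    x_def = disco_net(adv_patch)          # predicted RGB values (defense output)
    loss = F.l1_loss(x_def, cln_patch)    # L1 loss against the clean patch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Trivial stand-in network using kernel size 3, as quoted in the report.
    disco_net = torch.nn.Sequential(
        torch.nn.Conv2d(3, 64, kernel_size=3, padding=1),
        torch.nn.ReLU(),
        torch.nn.Conv2d(64, 3, kernel_size=3, padding=1),
    )
    optimizer = torch.optim.Adam(disco_net.parameters(), lr=1e-4)
    x_adv, x_cln = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
    print(train_step(disco_net, optimizer, x_adv, x_cln))

The patch cropping and L1 objective mirror the quantities reported in the Experiment Setup row; everything else (network depth, batch size, learning rate) is a placeholder chosen only to keep the sketch self-contained and runnable.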