Adversarial Robustness through Disentangled Representations

Authors: Shuo Yang, Tianyu Guo, Yunhe Wang, Chang Xu

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Theoretical analysis guarantees the mitigation of the trade-off between robustness and accuracy with good disentanglement and alignment performance. Experimental results on benchmark datasets finally demonstrate the empirical superiority of our method.
Researcher Affiliation | Collaboration | (1) School of Computer Science, Faculty of Engineering, The University of Sydney, Australia; (2) Key Laboratory of Machine Perception (MOE), CMIC, School of EECS, Peking University, China; (3) Huawei Noah's Ark Lab
Pseudocode | Yes | Algorithm 1: Training Process.
Open Source Code | Yes | We use PyTorch to implement our model and the code can be found here: https://github.com/AAAI2021DRRDN/DRRDN.git
Open Datasets | Yes | In this paper, we use MNIST and CIFAR10, which are benchmark datasets for the evaluation of defense methods.
Dataset Splits | Yes | We keep the default training/testing set splits in practice (a loading sketch follows the table).
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments.
Software Dependencies | No | The paper mentions PyTorch but does not specify a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | For optimization, we use the SGD optimizer with an initial learning rate of 1e-2, and 1e-1 for CIFAR10. The total number of training epochs is 100, during which a learning rate decay of 0.1 is applied at epochs 55, 75, and 90, respectively. The search for the optimal hyper-parameters is included in the supplementary materials. (A schedule sketch follows the table.)
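
Since the paper keeps the default training/testing splits of MNIST and CIFAR10, the sketch below shows one way those default splits can be loaded via torchvision. The torchvision pipeline, the `./data` root, and the `ToTensor` preprocessing are assumptions for illustration; the paper does not describe its data-loading code.

```python
# Minimal sketch: load MNIST and CIFAR10 with their default train/test
# splits via torchvision. The transform below is an assumption; the paper
# does not specify its preprocessing pipeline.
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # assumed preprocessing, not confirmed by the paper

# train=True / train=False select the default training/testing splits
mnist_train = torchvision.datasets.MNIST(
    root="./data", train=True, download=True, transform=transform)
mnist_test = torchvision.datasets.MNIST(
    root="./data", train=False, download=True, transform=transform)

cifar_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
cifar_test = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform)
```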
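
The setup row reports SGD with initial learning rates of 1e-2 and 1e-1 (CIFAR10), with a 0.1 decay at epochs 55, 75, and 90 over 100 epochs. A minimal sketch of that schedule in PyTorch (the paper's stated framework) follows; the placeholder model, the omitted loop body, and the choice of `MultiStepLR` as the scheduler are assumptions, not details confirmed by the paper.

```python
# Minimal sketch of the reported optimization schedule: SGD, 100 epochs,
# learning rate decayed by a factor of 0.1 at epochs 55, 75, and 90.
import torch

model = torch.nn.Linear(3 * 32 * 32, 10)  # placeholder model, assumption

# 1e-1 is the reported initial learning rate for CIFAR10
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[55, 75, 90], gamma=0.1)

for epoch in range(100):
    # ... one epoch of training with `optimizer` here (omitted) ...
    scheduler.step()  # apply the staged learning-rate decay
```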