Class-Disentanglement and Applications in Adversarial Detection and Defense
Authors: Kaiwen Yang, Tianyi Zhou, Yonggang Zhang, Xinmei Tian, Dacheng Tao
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments, this simple approach substantially improves the detection and defense against different types of adversarial attacks. |
| Researcher Affiliation | Collaboration | University of Science and Technology of China; University of Washington, Seattle; University of Maryland, College Park; JD Explore Academy |
| Pseudocode | No | The paper includes an architecture diagram (Figure 1) and mathematical equations for the objective function and attacks, but no structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available: https://github.com/kai-wen-yang/CD-VAE. |
| Open Datasets | Yes | All the experiments are conducted on two datasets, CIFAR-10 [26] and Restricted ImageNet [44]. |
| Dataset Splits | No | The paper provides training and test set sizes ('50,000 training images and 10,000 test images' for CIFAR-10, and '257,748 training images and 10,150 test images' for Restricted ImageNet) but does not specify details for a separate validation split. |
| Hardware Specification | No | The paper mentions 'GPU cluster built by MCC Lab of Information Science and Technology Institution, USTC' but does not provide specific GPU models, CPU details, or other hardware specifications. |
| Software Dependencies | No | The paper mentions 'AdamW' as the optimizer but does not specify versions for software libraries, frameworks, or other dependencies like Python, PyTorch, or TensorFlow. |
| Experiment Setup | Yes | For CIFAR-10, we use a VAE G(·) with a few convolutional layers [25] and WideResNet-28-10 [48] as the image classifier D(·). For Restricted ImageNet, due to the high resolution (299×299), we use VQ-VAE [38] as G(·) and ResNet-50 [19] as D(·). We set β = 0.2. During training, we use AdamW [12] with a weight decay of 1e-6 as the optimizer to minimize Eq. (1) in an end-to-end manner for 300 (60) epochs on CIFAR-10 (Restricted ImageNet). |
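For quick reference, the reported experiment setup can be collected into a small sketch. This is not the authors' code; the dictionary layout and key names are assumptions made here for readability, and only the values are taken from the setup description quoted above.

```python
# Summary of the training configuration reported in the paper (CD-VAE).
# Structure and key names are illustrative assumptions; values are from the paper.
cd_vae_setup = {
    "CIFAR-10": {
        "generator": "VAE with a few convolutional layers",  # G(.)
        "classifier": "WideResNet-28-10",                    # D(.)
        "epochs": 300,
    },
    "Restricted ImageNet": {
        "generator": "VQ-VAE",        # G(.), chosen due to 299x299 resolution
        "classifier": "ResNet-50",    # D(.)
        "input_resolution": (299, 299),
        "epochs": 60,
    },
    "shared": {
        "beta": 0.2,                  # weight in the objective, Eq. (1)
        "optimizer": "AdamW",
        "weight_decay": 1e-6,
    },
}

print(cd_vae_setup["shared"]["optimizer"])
```

Note that the paper reports no optimizer learning rate or batch size here, so those fields are deliberately omitted rather than guessed.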