Improving Adversarial Robustness Through the Contrastive-Guided Diffusion Process
Authors: Yidong Ouyang, Liyan Xie, Guang Cheng
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our theoretical results using simulations and demonstrate the good performance of Contrastive-DP on image datasets. |
| Researcher Affiliation | Academia | 1School of Data Science, The Chinese University of Hong Kong, Shenzhen, China 2Department of Statistics, University of California, Los Angeles, USA. |
| Pseudocode | Yes | Algorithm 1 Generation in Contrastive-guided Diffusion Process (Contrastive-DP) |
| Open Source Code | No | The paper does not explicitly provide a link to open-source code or state that the code will be made publicly available for the described methodology. |
| Open Datasets | Yes | We test the contrastive-DP algorithm on three image datasets: the MNIST dataset (LeCun et al., 1998), the CIFAR10 dataset (Krizhevsky, 2009), and the Traffic Signs dataset (Houben et al., 2013). |
| Dataset Splits | No | The paper specifies training and testing sizes for datasets like MNIST ("60k...for training and 10k...for testing"), but it does not explicitly mention validation dataset splits. |
| Hardware Specification | Yes | Running on a cluster of 4× RTX 2080 Ti GPUs. |
| Software Dependencies | No | The paper mentions tools and frameworks (e.g., PyTorch, TRADES, WRN-28-10) but does not provide specific version numbers for any software components. |
| Experiment Setup | Yes | A detailed description of the data-generation pipeline and the corresponding hyperparameters can be found in Appendix D.3. |