Robustness Verification for Contrastive Learning
Authors: Zekai Wang, Weiwei Liu
ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on various benchmark models and datasets verify our theoretical findings, and further demonstrate that our proposed RVCL is able to evaluate the robustness of both models and images. Our code is available at https://github.com/wzekai99/RVCL. |
| Researcher Affiliation | Academia | School of Computer Science, Wuhan University, China. Correspondence to: Weiwei Liu <liuweiwei863@gmail.com>. |
| Pseudocode | Yes | The pseudocode is presented as PREDICT in Appendix C.2. ... The procedure is presented as CERTIFY. |
| Open Source Code | Yes | Our code is available at https://github.com/wzekai99/RVCL. |
| Open Datasets | Yes | all CL encoders are trained on MNIST (LeCun & Cortes, 2010) and CIFAR-10 (Krizhevsky & Hinton, 2009). |
| Dataset Splits | No | The paper reports the standard train/test sizes for MNIST and CIFAR-10 (e.g., 60,000 training images and 10,000 testing images for MNIST), but it does not specify a separate validation set size or a methodology for creating a validation split. |
| Hardware Specification | Yes | Our experiments are conducted on a Ubuntu 64-Bit Linux workstation, having 10-core Intel Xeon Silver CPU (2.20 GHz) and Nvidia GeForce RTX 2080 Ti GPUs with 11GB graphics memory. |
| Software Dependencies | No | The paper mentions using 'Adam (Kingma & Ba, 2015) optimizer' but does not provide specific version numbers for any software components (e.g., Python, PyTorch, CUDA). |
| Experiment Setup | Yes | We set the step size of instance-wise attack α = 0.007, the number of PGD maximize iteration as K = 10. ... we train the encoder with 500 epochs under Adam (Kingma & Ba, 2015) optimizer with the learning rate of 0.001. For the learning rate scheduling, the learning rate is dropped by a factor of 10 for every 100 epochs. The batch size in training is 256. (A hedged sketch of this setup is shown below the table.) |
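
The quoted Experiment Setup row can be read as a concrete training and attack configuration. The following is a minimal PyTorch sketch, not the authors' RVCL implementation: only the values α = 0.007, K = 10, 500 epochs, Adam with learning rate 0.001 dropped by 10x every 100 epochs, and batch size 256 come from the paper's text. The `encoder`, `loss_fn` (a contrastive loss over positive pairs), the data loader, and the perturbation budget `eps` are assumptions introduced for illustration.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR


def pgd_instance_attack(encoder, x, x_pos, loss_fn, eps=8 / 255, alpha=0.007, k=10):
    """Instance-wise PGD sketch: K = 10 gradient-ascent steps of size alpha = 0.007
    on an assumed contrastive loss between x and its positive view x_pos."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(k):
        loss = loss_fn(encoder(x + delta), encoder(x_pos))
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()                # ascent step on the loss
            delta.clamp_(-eps, eps)                     # project to the L_inf ball (eps is an assumption)
            delta.copy_((x + delta).clamp(0, 1) - x)    # keep the perturbed image in [0, 1]
    return (x + delta).detach()


def train_encoder(encoder, loader, loss_fn, epochs=500, lr=1e-3, device="cuda"):
    """Adam with lr 0.001, dropped by a factor of 10 every 100 epochs, for 500 epochs.
    The loader is assumed to yield batches of 256 augmented positive pairs."""
    encoder.to(device)
    opt = Adam(encoder.parameters(), lr=lr)
    sched = StepLR(opt, step_size=100, gamma=0.1)       # 10x drop every 100 epochs
    for _ in range(epochs):
        for x, x_pos in loader:
            x, x_pos = x.to(device), x_pos.to(device)
            loss = loss_fn(encoder(x), encoder(x_pos))
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()
```

For the actual attack objective, encoder architectures, and verification procedures (PREDICT and CERTIFY), refer to the authors' released code at https://github.com/wzekai99/RVCL.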