Robust Pre-Training by Adversarial Contrastive Learning

Authors: Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically evaluate the proposed Adversarial Contrastive Learning (ACL) and show it can consistently outperform existing methods. For example, on the CIFAR-10 dataset, ACL outperforms the previous state-of-the-art unsupervised robust pre-training approach [1] by 2.99% on robust accuracy and 2.14% on standard accuracy.
Researcher Affiliation | Collaboration | Ziyu Jiang (Texas A&M University), Tianlong Chen (University of Texas at Austin), Ting Chen (Google Research, Brain Team), Zhangyang Wang (University of Texas at Austin); jiangziyu@tamu.edu, {tianlong.chen,atlaswang}@utexas.edu, iamtingchen@google.com
Pseudocode | Yes | Algorithm 1: Algorithm of Dual Stream (DS) Pretraining (a hedged sketch of this step is given after the table).
Open Source Code | Yes | Our codes and pre-trained models have been released at: https://github.com/VITA-Group/Adversarial-Contrastive-Learning.
Open Datasets | Yes | We evaluate three datasets: CIFAR-10, CIFAR-10-C [31], CIFAR-100.
Dataset Splits | Yes | The fine-tuned models are selected based on the held-out validation RA (robust accuracy). Figure 3: The robust accuracy on the cross-validation dataset w.r.t. different epochs.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for its experiments.
Software Dependencies | No | The paper does not provide version numbers for ancillary software such as libraries or solvers (e.g., PyTorch, TensorFlow, or CUDA versions).
Experiment Setup | Yes | For contrastive pre-training, we identically follow SimCLR [2] for all the optimizer settings, augmentation, and projection head structure. We choose a batch size of 512 and train for 1000 epochs. [...] We use SGD with 0.9 momentum and batch size 128. By default, we fine-tune for 25 epochs, with the initial learning rate set to 0.1 and then decayed by a factor of 10 at epochs 15 and 20. (The fine-tuning schedule is sketched below.)
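
The Pseudocode row above points to Algorithm 1 (Dual Stream pre-training). As a reading aid, here is a minimal PyTorch-style sketch of one such step, assuming a SimCLR-style encoder plus projection head (`model`), an NT-Xent contrastive loss, and an l-infinity PGD attack applied to both augmented views. The function names, attack hyperparameters (eps, alpha, steps), and the loss weight `w` are illustrative assumptions, and details such as the paper's dual batch-norm streams are omitted; this is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR's normalized temperature-scaled cross-entropy loss over a batch of view pairs."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, d) projected features
    sim = z @ z.t() / temperature                            # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))               # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                     # positive = the other view

def pgd_on_views(model, x1, x2, eps=8 / 255, alpha=2 / 255, steps=5):
    """l_inf PGD that perturbs both augmented views to *maximize* the contrastive loss.
    Assumes inputs are un-normalized images in [0, 1]."""
    d1 = torch.zeros_like(x1).uniform_(-eps, eps).requires_grad_(True)
    d2 = torch.zeros_like(x2).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = nt_xent(model(x1 + d1), model(x2 + d2))
        g1, g2 = torch.autograd.grad(loss, [d1, d2])
        with torch.no_grad():
            d1.add_(alpha * g1.sign()).clamp_(-eps, eps)
            d2.add_(alpha * g2.sign()).clamp_(-eps, eps)
    return (x1 + d1).clamp(0, 1).detach(), (x2 + d2).clamp(0, 1).detach()

def dual_stream_step(model, optimizer, x1, x2, w=0.5):
    """One pre-training step mixing the standard and adversarial contrastive streams."""
    x1_adv, x2_adv = pgd_on_views(model, x1, x2)
    loss = (1 - w) * nt_xent(model(x1), model(x2)) + w * nt_xent(model(x1_adv), model(x2_adv))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point the sketch illustrates is that the inner PGD attack maximizes the same contrastive loss that the outer optimizer minimizes, which is what distinguishes adversarial contrastive pre-training from ordinary data augmentation.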
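
The Experiment Setup row quotes the fine-tuning recipe: SGD with momentum 0.9, batch size 128, 25 epochs, initial learning rate 0.1, decayed by a factor of 10 at epochs 15 and 20. The sketch below wires exactly those quoted numbers into a standard PyTorch loop; the model, dataset, loss function, device handling, and the omission of weight decay are placeholders and assumptions rather than the released configuration.

```python
import torch
from torch.utils.data import DataLoader

def finetune(model, train_set, loss_fn, device="cuda"):
    """Fine-tune a pre-trained model with the quoted schedule (25 epochs of SGD)."""
    loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    # Learning rate 0.1 -> 0.01 at epoch 15 -> 0.001 at epoch 20.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[15, 20], gamma=0.1)
    model.to(device).train()
    for epoch in range(25):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = loss_fn(model(x), y)   # e.g., cross-entropy, or an adversarial training loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```

Per the Dataset Splits row, the resulting checkpoints are then selected by robust accuracy on the held-out validation split.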