Contrastive Learning with Adversarial Examples

Authors: Chih-Hui Ho, Nuno Vasconcelos

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we discuss an experimental evaluation of adversarial contrastive learning. Experiments are performed on CIFAR10 [29], CIFAR100 [29] or tiny ImageNet [1], using three different contrastive loss baselines: the loss of (5) (denoted as "Plain"), UEL [78] and SimCLR [12]. Unless otherwise noted, a ResNet18 encoder is trained using Algorithm 1 with α = 1, standard PyTorch augmentation, and an adversarial batchnorm momentum of 0.011. Two evaluation protocols are used, both based on a downstream classification task using features extracted by the learned encoder. These are implemented with a k = 200 nearest neighbor (kNN) classifier, and a logistic regression layer (LR). The encoder is trained with batch size 256 (128) and LR is trained for 1000 (200) epochs for CIFAR10 and CIFAR100 (tiny ImageNet). See supplementary for more details. Table 1: Downstream classification accuracy for three SSL methods, with and without (ϵ = 0) adversarial augmentation, on different datasets. Figure 5: Ablation study of (a) batch sizes, (b) embedding dimensions and (c) ResNet architectures. (Illustrative sketches of both evaluation protocols appear after this table.)
Researcher Affiliation | Academia | Chih-Hui Ho, Nuno Vasconcelos, Department of Electrical and Computer Engineering, University of California, San Diego, {chh279, nvasconcelos}@ucsd.edu
Pseudocode | Yes | Algorithm 1 Pseudocode of contrastive learning with adversarial example (CLAE) in a batch. (An illustrative sketch of this per-batch update appears after this table.)
Open Source Code | No | The paper does not provide an explicit statement or link for the source code of the described methodology.
Open Datasets | Yes | Experiments are performed on CIFAR10 [29], CIFAR100 [29] or tiny ImageNet [1].
Dataset Splits | No | The paper describes training and evaluation protocols but does not explicitly specify validation dataset splits (e.g., percentages, sample counts, or explicit mention of a validation set) for reproducibility.
Hardware Specification | No | The paper mentions 'NVIDIA GPU donations' and 'the Nautilus platform' in the acknowledgements, but these are too general and do not provide specific hardware models, processor types, or detailed specifications used for running the experiments.
Software Dependencies | No | The paper mentions 'standard Pytorch augmentation' and 'Tensorflow implementation' but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | Unless otherwise noted, a ResNet18 encoder is trained using Algorithm 1 with α = 1, standard PyTorch augmentation, and an adversarial batchnorm momentum of 0.011. Two evaluation protocols are used, both based on a downstream classification task using features extracted by the learned encoder. These are implemented with a k = 200 nearest neighbor (kNN) classifier, and a logistic regression layer (LR). The encoder is trained with batch size 256 (128) and LR is trained for 1000 (200) epochs for CIFAR10 and CIFAR100 (tiny ImageNet).
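
Below is a minimal, illustrative PyTorch-style sketch of the kind of per-batch update that Algorithm 1 (CLAE) describes: an adversarial view is generated by maximizing the contrastive loss, and the encoder is then trained on the clean loss plus an α-weighted adversarial loss. The single-step FGSM-style attack and the contrastive_loss, augment and encoder callables are assumptions for illustration, not the authors' released implementation, and the adversarial batchnorm handling mentioned above is omitted.

    import torch

    def clae_step(encoder, contrastive_loss, optimizer, x, augment, alpha=1.0, epsilon=0.03):
        # Illustrative per-batch update; names and the FGSM-style attack are assumptions.
        x1, x2 = augment(x), augment(x)                    # two stochastic views of the batch

        # Inner maximization: perturb one view so that the contrastive loss increases.
        x_adv = x2.clone().detach().requires_grad_(True)
        inner_loss = contrastive_loss(encoder(x1), encoder(x_adv))
        grad, = torch.autograd.grad(inner_loss, x_adv)
        x_adv = (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()  # assumes inputs in [0, 1]

        # Outer minimization: clean contrastive loss plus alpha-weighted adversarial loss.
        loss = contrastive_loss(encoder(x1), encoder(x2)) \
               + alpha * contrastive_loss(encoder(x1), encoder(x_adv))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()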
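
The first evaluation protocol quoted above is a k = 200 nearest neighbor classifier on features from the frozen encoder. A minimal sketch follows, assuming L2-normalized features and a similarity-weighted vote; the temperature, the num_classes default and the loader names are illustrative assumptions rather than values taken from the paper.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def knn_accuracy(encoder, train_loader, test_loader, k=200, temperature=0.1, num_classes=10):
        encoder.eval()
        feats, labels = [], []
        for x, y in train_loader:                      # build the feature bank from training data
            feats.append(F.normalize(encoder(x), dim=1))
            labels.append(y)
        bank = torch.cat(feats)                        # (N, d) feature bank
        bank_labels = torch.cat(labels)                # (N,) labels of the bank

        correct = total = 0
        for x, y in test_loader:
            q = F.normalize(encoder(x), dim=1)         # (B, d) query features
            sim = q @ bank.t()                         # cosine similarity to the bank
            w, idx = sim.topk(k, dim=1)                # k nearest neighbors per query
            w = (w / temperature).exp()                # weight each vote by its similarity
            votes = torch.zeros(x.size(0), num_classes)
            votes.scatter_add_(1, bank_labels[idx], w) # accumulate weighted class votes
            correct += (votes.argmax(1) == y).sum().item()
            total += y.numel()
        return correct / total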
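
The second protocol trains a logistic regression (LR) layer on frozen encoder features, for 1000 epochs on CIFAR10/CIFAR100 and 200 on tiny ImageNet according to the quoted setup. A minimal sketch, assuming an Adam optimizer and cross-entropy loss (both illustrative choices not specified in the quote):

    import torch
    import torch.nn as nn

    def train_linear_probe(encoder, train_loader, feat_dim, num_classes, epochs=1000, lr=1e-3):
        encoder.eval()                                 # the encoder stays frozen throughout
        probe = nn.Linear(feat_dim, num_classes)       # single logistic-regression layer
        opt = torch.optim.Adam(probe.parameters(), lr=lr)
        ce = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in train_loader:
                with torch.no_grad():
                    f = encoder(x)                     # frozen features
                loss = ce(probe(f), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return probe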