Adversarial Self-Supervised Contrastive Learning

Authors: Minseon Kim, Jihoon Tack, Sung Ju Hwang

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our method, Robust Contrastive Learning (RoCL), on multiple benchmark datasets, on which it obtains comparable robust accuracy over state-of-the-art supervised adversarial learning methods, and significantly improved robustness against the black box and unseen types of attacks.
Researcher Affiliation | Collaboration | Minseon Kim¹, Jihoon Tack¹, Sung Ju Hwang¹·² (¹KAIST, ²AITRICS); {minseonkim, jihoontack, sjhwang82}@kaist.ac.kr
Pseudocode | Yes | Algorithm 1: Robust Contrastive Learning (RoCL). A minimal sketch of the corresponding training components is given after this table.
Open Source Code | Yes | The code to reproduce the experimental results is available at https://github.com/Kim-Minseon/RoCL.
Open Datasets | Yes | We validate our method... on multiple benchmark datasets (CIFAR-10 and CIFAR-100).
Dataset Splits | No | The paper trains on CIFAR-10 and CIFAR-100 but does not explicitly state the training/validation/test splits (percentages or sample counts), nor does it cite the standard splits in the main text as the partitioning to use for reproduction.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or versions of other libraries) used for the experiments.
Experiment Setup | Yes | For all baselines and our method, we train with ℓ∞ attacks with the same attack strength of ϵ = 8/255. All ablation studies are conducted with ResNet18 trained on CIFAR-10, with the attack strength of ϵ = 8/255. Regarding the additional results on CIFAR-100 and details of the optimization & evaluation, please see Appendix A and C.
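
To make the pseudocode row concrete: RoCL's Algorithm 1 builds on a SimCLR-style contrastive objective, generating an instance-wise adversarial example that maximizes the contrastive loss and then training the encoder to minimize that loss over clean and adversarial views. Below is a minimal PyTorch sketch of the NT-Xent loss such a step relies on; this is not the authors' implementation, and the function name and the temperature value of 0.5 are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over two batches of projected embeddings.

    z1, z2: (N, D) projections of two augmented views of the same N images.
    Positive pairs are (z1[i], z2[i]); every other sample in the 2N-sized
    batch acts as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D), unit norm
    sim = torch.mm(z, z.t()) / temperature                  # (2N, 2N) cosine logits
    sim.fill_diagonal_(float("-inf"))                       # never pair a sample with itself
    # The positive for row i is row i+N (first half) or row i-N (second half).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```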
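Likewise, the ℓ∞ constraint from the experiment-setup row can be sketched as an instance-wise PGD attack that ascends the contrastive loss inside an ϵ = 8/255 ball. This reuses nt_xent_loss from the sketch above; the step size (2/255), iteration count (10), and random start are assumed defaults, since only ϵ is stated in the paper's setup quoted here.

```python
import torch

def instancewise_pgd(model, x1, x2, eps=8 / 255, alpha=2 / 255, steps=10):
    """Instance-wise l_inf PGD: perturb view x1 to *maximize* the contrastive
    loss against the second view x2, within an eps-ball around x1.

    model: encoder plus projection head, returning (N, D) embeddings.
    alpha and steps are hypothetical values for illustration only.
    """
    x_adv = x1.clone().detach()
    x_adv += torch.empty_like(x_adv).uniform_(-eps, eps)    # random start in the eps-ball
    x_adv = torch.clamp(x_adv, 0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nt_xent_loss(model(x_adv), model(x2))
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()             # gradient ascent on the loss
            x_adv = torch.clamp(x_adv, x1 - eps, x1 + eps)  # project back into the l_inf ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)            # keep pixels in valid range
    return x_adv.detach()
```

A training step would then minimize nt_xent_loss over the clean views together with the adversarial view returned here, which is the clean/adversarial pairing the pseudocode row refers to.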