Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization

Authors: Xilie Xu, Jingfeng ZHANG, Feng Liu, Masashi Sugiyama, Mohan S. Kankanhalli

NeurIPS 2023

Reproducibility assessment. Each reproducibility variable is listed below with its result, followed by the supporting LLM response.
Research Type: Experimental
"Empirically, our experimental results show that invariant regularization significantly improves the performance of state-of-the-art ACL methods in terms of both standard generalization and robustness on downstream tasks." "Empirically, we conducted comprehensive experiments on various datasets including CIFAR-10 [31], CIFAR-100 [31], STL-10 [12], CIFAR-10-C [26], and CIFAR-100-C [26] to show the effectiveness of our proposed method in improving ACL methods [29, 22, 50, 36]."

Researcher Affiliation: Collaboration
1) School of Computing, National University of Singapore; 2) RIKEN Center for Advanced Intelligence Project (AIP); 3) School of Computer Science, The University of Auckland; 4) School of Computing and Information Systems, The University of Melbourne; 5) Graduate School of Frontier Sciences, The University of Tokyo

Pseudocode: Yes
"Algorithm 1: ACL with Adversarial Invariant Regularization (AIR)"

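For orientation, below is a minimal, illustrative PyTorch sketch of one adversarial contrastive pre-training step with an added invariance regularizer, loosely mirroring the structure of Algorithm 1. The helper names (`nt_xent`, `pgd_views`, `acl_with_invariant_reg_step`), the PGD attack on the contrastive objective, and the KL-based regularizer are simplified stand-ins and assumptions, not the authors' exact formulation of AIR; see the released code for the real implementation.

```python
# Illustrative sketch only: ACL-style step with a stand-in invariance regularizer.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss between two batches of projections."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2B, d)
    sim = z @ z.t() / temperature                      # (2B, 2B) similarity logits
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, -1e9)                  # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def pgd_views(model, x1, x2, eps=8 / 255, alpha=2 / 255, steps=5):
    """Craft adversarial views by maximizing the contrastive loss with PGD."""
    x1_adv, x2_adv = x1.clone().detach(), x2.clone().detach()
    for _ in range(steps):
        x1_adv.requires_grad_(True)
        x2_adv.requires_grad_(True)
        loss = nt_xent(model(x1_adv), model(x2_adv))
        g1, g2 = torch.autograd.grad(loss, [x1_adv, x2_adv])
        with torch.no_grad():
            x1_adv = (x1_adv + alpha * g1.sign()).clamp(x1 - eps, x1 + eps).clamp(0, 1)
            x2_adv = (x2_adv + alpha * g2.sign()).clamp(x2 - eps, x2 + eps).clamp(0, 1)
    return x1_adv.detach(), x2_adv.detach()

def acl_with_invariant_reg_step(model, optimizer, x1, x2, lam=0.5):
    """One step: adversarial + natural contrastive losses plus a stand-in
    invariance term aligning natural and adversarial similarity distributions
    (the paper defines its AIR regularizer differently)."""
    x1_adv, x2_adv = pgd_views(model, x1, x2)
    z1, z2 = model(x1), model(x2)                      # natural views
    z1_adv, z2_adv = model(x1_adv), model(x2_adv)      # adversarial views
    loss_acl = nt_xent(z1_adv, z2_adv) + nt_xent(z1, z2)
    p_nat = F.softmax(F.normalize(z1, dim=1) @ F.normalize(z2, dim=1).t(), dim=1)
    log_p_adv = F.log_softmax(
        F.normalize(z1_adv, dim=1) @ F.normalize(z2_adv, dim=1).t(), dim=1)
    loss_reg = F.kl_div(log_p_adv, p_nat, reduction="batchmean")
    loss = loss_acl + lam * loss_reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
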
Open Source Code: Yes
"Our source code is at https://github.com/GodXuxilie/Enhancing_ACL_via_AIR."

Open Datasets: Yes
"We conducted comprehensive experiments on various datasets including CIFAR-10 [31], CIFAR-100 [31], STL-10 [12], CIFAR-10-C [26], and CIFAR-100-C [26]."

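As a hedged illustration, the three standard pre-training datasets can be fetched through torchvision; CIFAR-10-C and CIFAR-100-C are separate downloads from their original corruption-benchmark release and are not bundled with torchvision. The root path and the plain `ToTensor` transform below are placeholders, not the authors' settings.

```python
from torchvision import datasets, transforms

# Placeholder transform; the ACL pipeline uses SimCLR-style augmented views instead.
transform = transforms.ToTensor()

cifar10 = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
cifar100 = datasets.CIFAR100(root="./data", train=True, download=True, transform=transform)
stl10 = datasets.STL10(root="./data", split="unlabeled", download=True, transform=transform)
```
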
Dataset Splits: No
The paper describes pre-training and finetuning procedures and reports test accuracy, but it does not explicitly specify a validation set or train/validation/test splits (as percentages or sample counts) for its experiments.

Hardware Specification: Yes
"We conducted all experiments on Python 3.8.8 (PyTorch 1.13) with NVIDIA RTX A5000 GPUs (CUDA 11.6)."

Software Dependencies: Yes
"We conducted all experiments on Python 3.8.8 (PyTorch 1.13) with NVIDIA RTX A5000 GPUs (CUDA 11.6)."

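A quick way to check that a local environment matches the reported software stack (Python 3.8.8, PyTorch 1.13, CUDA 11.6); this snippet is illustrative and not part of the released code.

```python
import sys
import torch

# Compare the local stack against the reported setup.
print("Python:", sys.version.split()[0])   # reported: 3.8.8
print("PyTorch:", torch.__version__)       # reported: 1.13
print("CUDA:", torch.version.cuda)         # reported: 11.6
print("GPU:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
```
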
Experiment Setup: Yes
"We utilized ResNet-18 [25] as the representation extractor... We pre-trained ResNet-18 models using SGD for 1000 epochs with an initial learning rate of 5.0 and a cosine annealing schedule [35]. The batch size β is fixed as 512. The adversarial budget ϵ is set as 8/255."
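For concreteness, here is a hedged sketch of the quoted pre-training configuration (ResNet-18 backbone, SGD with initial learning rate 5.0, cosine annealing over 1000 epochs, batch size 512, L_inf budget 8/255). The momentum, weight decay, and projection-output dimension below are assumptions not stated in the quote.

```python
import torch
from torchvision.models import resnet18

# Backbone; the contrastive projection head used in the paper is omitted here.
model = resnet18(num_classes=128)  # output dimension is an assumption

# Reported: SGD, initial LR 5.0, cosine annealing over 1000 epochs.
optimizer = torch.optim.SGD(model.parameters(), lr=5.0,
                            momentum=0.9, weight_decay=1e-6)  # momentum/decay assumed
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

batch_size = 512    # reported batch size
epsilon = 8 / 255   # reported L_inf adversarial budget
num_epochs = 1000   # reported pre-training length
```
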