Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection
Authors: Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan S. Kankanhalli
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, our comprehensive results corroborate that RCS can speed up ACL by a large margin without significantly hurting the robustness transferability. Notably, to the best of our knowledge, we are the first to conduct ACL efficiently on the large-scale ImageNet-1K dataset to obtain an effective robust representation via RCS. |
| Researcher Affiliation | Academia | Xilie Xu1 , Jingfeng Zhang2,3 , Feng Liu4, Masashi Sugiyama2,5, Mohan Kankanhalli1 1 School of Computing, National University of Singapore 2 RIKEN Center for Advanced Intelligence Project (AIP) 3 School of Computer Science, The University of Auckland 4 School of Computing and Information Systems, The University of Melbourne 5 Graduate School of Frontier Sciences, The University of Tokyo |
| Pseudocode | Yes | Algorithm 1 Robustness-aware Coreset Selection (RCS) Algorithm 2 Efficient ACL via RCS |
| Open Source Code | Yes | Our source code is at https://github.com/GodXuxilie/Efficient_ACL_via_RCS. |
| Open Datasets | Yes | Notably, to the best of our knowledge, we are the first to conduct ACL efficiently on the large-scale ImageNet-1K dataset to obtain an effective robust representation via RCS. [1] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009. |
| Dataset Splits | Yes | Therefore, given an unlabeled training set X ∈ 𝒳^N and an unlabeled validation set U ∈ 𝒳^M (M ≪ N), our proposed Robustness-aware Coreset Selection (RCS) searches for a coreset S* = arg min_{S ⊆ X, \|S\|/\|X\| = k} L_RD(U; arg min_θ L_ACL(S; θ)). |
| Hardware Specification | Yes | We conducted all experiments on Python 3.8.8 (PyTorch 1.13) with 4 NVIDIA RTX A5000 GPUs (CUDA 11.6). |
| Software Dependencies | Yes | We conducted all experiments on Python 3.8.8 (PyTorch 1.13) with 4 NVIDIA RTX A5000 GPUs (CUDA 11.6). |
| Experiment Setup | Yes | Efficient pre-training configurations. We leverage RCS to speed up ACL [14] and DynACL [17] using ResNet-18 backbone networks. The pre-training settings of ACL and DynACL exactly follow their original papers, and we provide the details in Appendix B.2. For the hyperparameters of RCS, we set β = 512, η = 0.01, and T_RCS = 3. We took W = 100 epochs for warmup, and then CS was executed every I = 20 epochs. We used different subset fractions k ∈ {0.05, 0.1, 0.2} for CS. |
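The reported setup (warmup for W = 100 epochs, coreset selection every I = 20 epochs, subset fraction k) can be sketched as a minimal schedule in Python. This is an illustrative outline of the training loop structure described above, not the authors' implementation; the function names and the top-score selection rule are placeholders (RCS itself scores candidate batches by gradient similarity to the representational-divergence loss on the validation set, per Algorithm 1).

```python
def rcs_schedule(total_epochs, warmup=100, interval=20):
    """Epochs at which coreset selection (CS) is re-run: after the
    warmup phase, CS triggers every `interval` epochs (hypothetical
    reading of the schedule W = 100, I = 20 quoted in the setup)."""
    return [e for e in range(total_epochs)
            if e >= warmup and (e - warmup) % interval == 0]

def select_coreset(scores, fraction):
    """Greedy top-score selection of a fraction-k coreset.
    `scores` is a per-example (or per-batch) utility score; in RCS this
    would come from gradient similarity to L_RD on the validation set,
    but here it is an abstract input for illustration."""
    n_keep = max(1, int(len(scores) * fraction))
    ranked = sorted(range(len(scores)),
                    key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:n_keep])  # indices of the selected coreset
```

For example, with 160 total epochs the schedule re-selects the coreset at epochs 100, 120, and 140, and between selections the model trains only on the current fraction-k subset, which is where the reported speedup comes from.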