Optimal Positive Generation via Latent Transformation for Contrastive Learning

Authors: Yinqi Li, Hong Chang, Bingpeng Ma, Shiguang Shan, Xilin Chen

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that using COP-Gen to generate positives outperforms other latent transformation methods and even real-image-based methods in self-supervised contrastive learning.
Researcher Affiliation | Academia | Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Pseudocode | Yes | We gave the pseudo code in Appendix A and experimental details in Section 4 and Appendix C.
Open Source Code | Yes | Code and models are available at: https://github.com/LiYinqi/COP-Gen
Open Datasets | Yes | Most of our self-supervised experiments are conducted using ImageNet ILSVRC-2012 [38] pretrained BigBiGAN [39, 48]... leveraging MNIST [49] pretrained Residual Flow [43] model... Object Detection on PASCAL VOC [58]... Food101 [59], CIFAR10 [60], CIFAR100 [60], SUN397 [61], Pets [62], Caltech-101 [63], and Flowers [64].
Dataset Splits | Yes | We report Top-1 and Top-5 classification accuracies on ImageNet-1K validation set.
Hardware Specification | Yes | takes about 1 hour on 4 NVIDIA 2080 Ti GPUs.
Software Dependencies | No | We use PyTorch [47] for all experiments. While PyTorch is mentioned, no version number is given for it or for any other key software dependency.
Experiment Setup | Yes | The temperature is set to 0.1. Adam [52] (β1 = 0.5, β2 = 0.999) is used as the optimizer, where the learning rate is set to 3 × 10^-5 for f and 1 × 10^-5 for T_z. We train over 200K generated samples with a batch size of 176... We use InfoNCE as the loss, which is optimized using SGD with a momentum of 0.9, a learning rate of 0.03 × BatchSize/256, and a weight decay of 10^-4. We train with a batch size of 224 for 100 epochs and decay the learning rate using the cosine decay schedule [55].
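
The quoted setup maps onto standard PyTorch components. Below is a minimal sketch, assuming placeholder modules for the encoder f, the latent transformation T_z, and the downstream backbone (these placeholder names are not taken from the released code); it only illustrates the loss temperature, optimizer, and learning-rate settings quoted above, not the full COP-Gen training loop.

```python
# Minimal sketch of the quoted optimization settings (NOT the authors' released code).
# `f`, `t_z`, and `encoder` are placeholder modules used to keep the snippet self-contained.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce_loss(q, k, temperature=0.1):
    """InfoNCE with temperature 0.1: row i of q and row i of k form a positive pair."""
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.t() / temperature                   # (B, B) cosine-similarity logits
    labels = torch.arange(q.size(0), device=q.device)  # diagonal entries are the positives
    return F.cross_entropy(logits, labels)

# Stage 1: optimizing the latent transformation T_z with Adam,
# using the per-module learning rates quoted from the paper.
f = nn.Linear(128, 128)      # placeholder for the encoder f
t_z = nn.Linear(120, 120)    # placeholder for the latent transformation T_z
optimizer_tz = torch.optim.Adam(
    [{"params": f.parameters(), "lr": 3e-5},
     {"params": t_z.parameters(), "lr": 1e-5}],
    betas=(0.5, 0.999),
)

# Stage 2: contrastive pretraining with SGD, a linearly scaled learning rate
# (0.03 x BatchSize / 256), and cosine decay over 100 epochs.
batch_size = 224
base_lr = 0.03 * batch_size / 256                      # = 0.02625 for batch size 224
encoder = nn.Linear(128, 128)                          # placeholder for the backbone
optimizer = torch.optim.SGD(encoder.parameters(), lr=base_lr,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
```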