Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning
Authors: Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on image classification and semantic segmentation verify the effectiveness of Core-tuning. |
| Researcher Affiliation | Collaboration | Yifan Zhang¹, Bryan Hooi¹, Dapeng Hu¹, Jian Liang², Jiashi Feng³ (¹National University of Singapore, ²Chinese Academy of Sciences, ³SEA AI Lab) |
| Pseudocode | Yes | The pseudo code is provided in the supplementary. |
| Open Source Code | Yes | The source code of Core-tuning is available at: https://github.com/Vanint/Core-tuning. |
| Open Datasets | Yes | ImageNet-20 (a subset of ImageNet with 20 classes), CIFAR10, CIFAR100 [29], Caltech-101 [15], DTD [10], FGVC Aircraft [39], Stanford Cars [28], Oxford-IIIT Pets [44] and Oxford 102 Flowers [42]. |
| Dataset Splits | Yes | evaluated on the val2012 set (PASCAL VOC 2012 validation split). |
| Hardware Specification | No | The paper mentions 'SGD based on two GPUs' but does not specify the type or model of GPUs or any other hardware components used for experiments. |
| Software Dependencies | No | The paper states 'We implement Core-tuning in PyTorch' but does not provide specific version numbers for PyTorch or other software dependencies. |
| Experiment Setup | Yes | Following [6], we perform parameter tuning for η and α from {0.1, 1, 10} on each dataset. Moreover, we set the temperature τ = 0.07. To make the generated negative pairs closer to negatives, we clip λ ∼ Beta(α, α) by λ ≤ λn when generating hard negative pairs, where λn is a threshold and we set it to 0.8. |
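
For concreteness, below is a minimal PyTorch sketch of the two mechanisms quoted in the setup row: mixing an anchor feature with a negative feature using λ ∼ Beta(α, α) clipped at λn = 0.8, and adding a contrastive regularizer (temperature τ = 0.07) weighted by η to the cross-entropy loss. This is an illustrative assumption, not the authors' implementation: the function names (`hard_negative_mix`, `contrast_regularized_loss`, `lam_n`) are hypothetical, and a plain supervised contrastive term stands in for the paper's exact loss.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch: names below are assumptions for this example,
# not identifiers from the authors' released repository.

def hard_negative_mix(anchor, negative, alpha=1.0, lam_n=0.8):
    """Synthesize a hard negative by mixing an anchor feature with a negative one.

    lambda is drawn from Beta(alpha, alpha) and clipped so that lambda <= lam_n,
    keeping the mixed feature closer to the negative sample (lam_n = 0.8 here).
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = min(lam, lam_n)                               # clip lambda <= lam_n
    mixed = lam * anchor + (1.0 - lam) * negative
    return F.normalize(mixed, dim=-1)

def contrast_regularized_loss(logits, labels, features, eta=1.0, tau=0.07):
    """Cross-entropy plus a supervised contrastive regularizer weighted by eta."""
    ce = F.cross_entropy(logits, labels)

    feats = F.normalize(features, dim=-1)
    sim = feats @ feats.t() / tau                       # pairwise cosine similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float('-inf'))     # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    con = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count

    return ce + eta * con.mean()
```

Per the setup row, η and α would be tuned per dataset from {0.1, 1, 10}; the authors' exact loss and hard-pair generation are available in the released code at https://github.com/Vanint/Core-tuning.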