On the Effectiveness of Out-of-Distribution Data in Self-Supervised Long-Tail Learning

Authors: Jianhong Bai, Zuozhu Liu, Hualiang Wang, Jin Hao, Yang Feng, Huanpeng Chu, Haoji Hu

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted on various datasets and several state-of-the-art SSL frameworks to verify the effectiveness of the proposed method. The results show that our method significantly improves the performance of SSL on long-tailed datasets, and even outperforms previous work that uses external ID data. Our code is available at https://github.com/JianhongBai/COLT.
Researcher Affiliation | Collaboration | Jianhong Bai¹, Zuozhu Liu¹, Hualiang Wang², Jin Hao³, Yang Feng⁴, Huanpeng Chu¹, Haoji Hu¹ (¹Zhejiang University, ²The Hong Kong University of Science and Technology, ³Harvard University, ⁴Angelalign Technology)
Pseudocode | Yes | Algorithm 1 gives the overall pipeline of COLT. Input: ID train set S_id, OOD dataset S_ood, sample budget K, train epochs T, momentum coefficient m, warm-up epochs w, sample interval r, cluster number C, hyper-parameters k, τ_c. Output: pre-trained model parameters θ_T.
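The algorithm header above fixes COLT's interface but not the sampling rule itself. The sketch below reconstructs only the control flow implied by those inputs: SSL pre-training for T epochs, an EMA encoder with momentum coefficient m, and a refresh of the K-image OOD subset every r epochs after w warm-up epochs. `ssl_loss_fn` and `score_fn` are hypothetical stand-ins; the paper's actual cluster-based tailness scoring (governed by C, k, and τ_c) is not reproduced here.

```python
import copy
import torch
import torch.nn.functional as F


def colt_pretrain(encoder, id_batches, ood_pool, ssl_loss_fn, score_fn,
                  K=10_000, T=2000, m=0.99, w=10, r=25, lr=1e-3):
    """Structural sketch of Algorithm 1: pre-train with an SSL loss and
    periodically re-sample a K-image OOD subset via the momentum encoder."""
    momentum_encoder = copy.deepcopy(encoder)          # offline EMA copy
    for p in momentum_encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.SGD(encoder.parameters(), lr=lr)
    extra = ood_pool[:0]                               # currently sampled OOD subset (empty)
    for epoch in range(T):
        for x in id_batches:
            # train on ID data plus the currently sampled OOD images
            batch = torch.cat([x, extra[:len(x)]]) if len(extra) else x
            loss = ssl_loss_fn(encoder, batch)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():                      # EMA update with coefficient m
                for p, pm in zip(encoder.parameters(),
                                 momentum_encoder.parameters()):
                    pm.mul_(m).add_(p, alpha=1 - m)
        if epoch >= w and (epoch - w) % r == 0:        # refresh OOD subset every r epochs
            with torch.no_grad():
                scores = score_fn(momentum_encoder, ood_pool)
            idx = scores.topk(min(K, len(ood_pool))).indices
            extra = ood_pool[idx]
    return encoder.state_dict()                        # theta_T


# Toy usage with stand-in components (shapes only; not the paper's models).
enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
id_batches = [torch.randn(8, 3, 32, 32) for _ in range(4)]
ood_pool = torch.randn(64, 3, 32, 32)
ssl = lambda f, x: F.mse_loss(f(x), f(x + 0.1 * torch.randn_like(x)))  # crude two-view loss
score = lambda f, x: f(x).norm(dim=1)                  # placeholder "tailness" proxy
theta_T = colt_pretrain(enc, id_batches, ood_pool, ssl, score, K=16, T=2, w=0, r=1)
```

In the toy usage, the linear encoder, the two-view loss, and the norm-based score are placeholders chosen only so the loop runs end to end.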
Open Source Code | Yes | Our code is available at https://github.com/JianhongBai/COLT.
Open Datasets | Yes | We conduct experiments on four popular datasets. CIFAR-10-LT/CIFAR-100-LT are long-tail subsets sampled from the original CIFAR-10/CIFAR-100 (Cui et al., 2019a). ... ImageNet-100-LT is proposed by Jiang et al. (2021b), with 12K images sampled from ImageNet-100 (Tian et al., 2020) with a Pareto distribution. ... Places-LT (Liu et al., 2019) contains about 62.5K images sampled from the large-scale scene-centric Places dataset (Zhou et al., 2017) with a Pareto distribution.
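As context for the CIFAR-LT construction cited above (Cui et al., 2019a), here is a minimal sketch of the standard exponential-profile subsampling used to build such long-tail subsets. The imbalance ratio of 0.01 and the resulting counts are illustrative assumptions, not values quoted from the paper, and the Pareto-profile variants used for ImageNet-100-LT and Places-LT are not shown.

```python
import numpy as np


def long_tail_counts(n_max, num_classes, imbalance):
    # class i keeps n_max * imbalance**(i / (num_classes - 1)) samples:
    # class 0 keeps n_max, the last class keeps n_max * imbalance
    return [int(n_max * imbalance ** (i / (num_classes - 1)))
            for i in range(num_classes)]


def long_tail_indices(labels, counts, seed=0):
    # stratified subsampling: keep counts[c] random examples of class c
    rng = np.random.default_rng(seed)
    keep = []
    for cls, n in enumerate(counts):
        cls_idx = np.flatnonzero(labels == cls)
        keep.extend(rng.choice(cls_idx, size=n, replace=False))
    return np.sort(np.array(keep))


# e.g. CIFAR-10 with an assumed imbalance ratio of 0.01: 5000 head -> 50 tail images
print(long_tail_counts(5000, 10, 0.01))   # [5000, 2997, 1796, ..., 83, 50]
```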
Dataset Splits | No | The paper discusses evaluation protocols (linear-probing and few-shot) in which the “full dataset” or “1% samples” are used for fine-tuning a classifier, and reports “Test accuracy (%).” However, it does not explicitly provide percentages or counts for a general train/validation/test split for the primary self-supervised training phase.
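Since the excerpt names the few-shot protocol (“1% samples”) without specifying how the 1% is drawn, the sketch below shows one plausible stratified split; per-class sampling and the keep-at-least-one guard are assumptions, not details from the paper.

```python
import numpy as np


def one_percent_split(labels, frac=0.01, seed=0):
    # pick frac of each class's indices for few-shot fine-tuning
    rng = np.random.default_rng(seed)
    picked = []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        n = max(1, int(round(frac * len(idx))))  # keep >= 1 sample per class
        picked.extend(rng.choice(idx, size=n, replace=False))
    return np.sort(np.array(picked))
```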
Hardware Specification | Yes | We implement all our techniques using PyTorch (Paszke et al., 2017) and conduct the experiments using RTX 3090 GPUs.
Software Dependencies | No | The paper mentions “PyTorch (Paszke et al., 2017)” but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | We evaluate our method with the SimCLR (Chen et al., 2020) framework, with batch size 512 for small datasets (CIFAR-10-LT/CIFAR-100-LT) and 256 for large datasets (ImageNet-100-LT/Places-LT) by default. We pre-train all the baselines and COLT for 2000 epochs on CIFAR-10/100, 1000 epochs on ImageNet-100, and 500 epochs on Places. As for the fine-tuning stage, the linear-probing and few-shot results are produced by fine-tuning the classifier for 30 epochs and 100 epochs, respectively. ... We sample K = 10,000 OOD images every r = 25 epochs for CIFAR-10-LT/CIFAR-100-LT and Places-LT, and every r = 50 epochs for ImageNet-100-LT.
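For reference, the hyper-parameters quoted in this row can be collected into a single config; values the excerpt does not state (optimizer, learning rate, augmentations) are deliberately omitted rather than guessed.

```python
# Quoted pre-training setup, gathered into one place for reference.
PRETRAIN_CONFIG = {
    "framework": "SimCLR",
    "batch_size": {"CIFAR-10-LT": 512, "CIFAR-100-LT": 512,
                   "ImageNet-100-LT": 256, "Places-LT": 256},
    "pretrain_epochs": {"CIFAR-10-LT": 2000, "CIFAR-100-LT": 2000,
                        "ImageNet-100-LT": 1000, "Places-LT": 500},
    "finetune_epochs": {"linear_probing": 30, "few_shot": 100},
    "ood_sample_budget_K": 10_000,
    "sample_interval_r": {"CIFAR-10-LT": 25, "CIFAR-100-LT": 25,
                          "Places-LT": 25, "ImageNet-100-LT": 50},
}
```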