Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning

Authors: Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted on CIFAR-10, CIFAR-100, STL-10, Tiny ImageNet and standard ImageNet-1K with popular unsupervised methods SimCLR, BYOL, MoCo V1&V2, SwAV, etc.
Researcher Affiliation | Collaboration | 1) Carnegie Mellon University; 2) Reality Labs, Meta Inc.; 3) University of California, Berkeley; 4) Mohamed bin Zayed University of Artificial Intelligence
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Code is publicly available at https://github.com/szq0214/Un-Mix.
Open Datasets | Yes | Extensive experiments are conducted on CIFAR-10, CIFAR-100, STL-10, Tiny ImageNet and standard ImageNet-1K. [...] CIFAR-10/100 [Krizhevsky and Hinton 2009] consist of tiny colored natural images [...]. ImageNet-1K [Deng et al. 2009], aka the ILSVRC 2012 classification dataset, consists of 1000 classes, with 1.28 million training images and 50K validation images.
Dataset Splits | Yes | ImageNet-1K [Deng et al. 2009], aka the ILSVRC 2012 classification dataset, consists of 1000 classes, with 1.28 million training images and 50K validation images.
Hardware Specification | Yes | For example, we use a mini-batch size of 256 with 8 NVIDIA V100 GPUs on ImageNet-1K.
Software Dependencies | No | The paper mentions implementing the method with "PyTorch codes" but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | On CIFAR-10 and CIFAR-100, we train for 1,000 epochs with learning rate 3×10⁻³; on Tiny ImageNet, 1,000 epochs with learning rate 2×10⁻³; on STL-10, 2,000 epochs with learning rate 2×10⁻³. We also apply warm-up for the first 500 iterations, and a 0.2 learning rate drop at 50 and 25 epochs before the end. [...] Unless otherwise stated, all the hyperparameter configurations strictly follow the baseline MoCo V2 on ImageNet-1K. For example, we use a mini-batch size of 256.
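The quoted schedule (500-iteration warm-up, base learning rates of 3×10⁻³ or 2×10⁻³, and 0.2 learning-rate drops at 50 and 25 epochs before the end) can be made concrete with a small sketch. The snippet below is illustrative only and is not the authors' released code: the optimizer choice, the dummy parameter, and steps_per_epoch are placeholders assumed for the example.

```python
import torch

def lr_at(global_step, epoch, total_epochs, base_lr,
          warmup_iters=500, drops=(50, 25), factor=0.2):
    """Learning rate following the quoted schedule: linear warm-up over the
    first 500 iterations, then base_lr, multiplied by 0.2 once 50 epochs
    before the end of training and again 25 epochs before the end."""
    if global_step < warmup_iters:
        return base_lr * (global_step + 1) / warmup_iters
    lr = base_lr
    for d in drops:
        if epoch >= total_epochs - d:
            lr *= factor
    return lr

# Hypothetical CIFAR-10/100 run matching the quote: 1,000 epochs, lr 3e-3.
# steps_per_epoch is a placeholder; the CIFAR batch size is not quoted above.
total_epochs, base_lr, steps_per_epoch = 1000, 3e-3, 200
optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=base_lr)
for epoch in range(total_epochs):
    for step in range(steps_per_epoch):
        global_step = epoch * steps_per_epoch + step
        for group in optimizer.param_groups:
            group["lr"] = lr_at(global_step, epoch, total_epochs, base_lr)
        # forward pass, contrastive loss, backward, optimizer.step() go here
```

Per the quote, the ImageNet-1K runs would instead keep all remaining hyperparameters from the MoCo V2 baseline (e.g., mini-batch size 256 on 8 GPUs).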