Unsupervised Meta-Learning for Few-Shot Image Classification

Authors: Siavash Khodadadeh, Ladislau Bölöni, Mubarak Shah

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On the Omniglot and Mini-Imagenet few-shot learning benchmarks, UMTRA outperforms every tested approach based on unsupervised learning of representations, while alternating for the best performance with the recent CACTUs algorithm.
Researcher Affiliation | Academia | Siavash Khodadadeh, Ladislau Bölöni, Dept. of Computer Science, University of Central Florida (siavash.khodadadeh@knights.ucf.edu, lboloni@cs.ucf.edu); Mubarak Shah, Center for Research in Computer Vision, University of Central Florida (shah@crcv.ucf.edu)
Pseudocode | Yes | Algorithm 1: Unsupervised Meta-learning with Tasks constructed by Random sampling and Augmentation (UMTRA). A hedged sketch of the task-construction step appears after the table.
Open Source Code | No | The paper does not state that source code for the described methodology will be released, nor does it provide a direct link to a code repository.
Open Datasets | Yes | On the Omniglot and Mini-Imagenet few-shot learning benchmarks... Omniglot [17] is a dataset of handwritten characters frequently used to compare few-shot learning algorithms... The Mini-Imagenet dataset was introduced by [27] as a subset of the ImageNet dataset [9], suitable as a benchmark for few-shot learning algorithms.
Dataset Splits | Yes | To allow comparisons with other published results, in our experiments we follow the experimental protocol described in [28]: 1200 characters were used for training, 100 characters were used for validation, and 323 characters were used for testing. (See the split sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running the experiments. It discusses neural networks in general terms but does not specify the hardware infrastructure.
Software Dependencies | No | The paper mentions implementing the classifier and building on existing frameworks (e.g., MAML) but does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | In our experiments, we realized that two of the most important hyperparameters in meta-learning are the meta-batch size, N_MB, and the number of updates, N_U. In Table 2, we study the effects of these hyperparameters on the accuracy of the network for the randomly zeroed pixels and random shift augmentation. Based on this experiment, we decide to fix the meta-batch size to 25 and the number of updates to 1. (See the configuration sketch after the table.)
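
The Pseudocode row cites the paper's Algorithm 1, whose core step is building synthetic few-shot tasks from unlabeled data: sample N images at random, treat each one as its own class, and generate the task's validation set by augmenting the sampled images. Below is a minimal NumPy sketch of that step, using the randomly-zeroed-pixels and random-shift augmentations the paper experiments with; the function names (make_umtra_task, zero_pixels, random_shift) and all parameter values are illustrative assumptions, not the authors' code (none was released).

```python
# Hedged sketch of UMTRA's task construction (Algorithm 1); names and
# parameter values are illustrative, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def zero_pixels(img, frac=0.1):
    """Randomly zero a fraction of pixels (one of the paper's augmentations)."""
    mask = rng.random(img.shape[:2]) < frac
    out = img.copy()
    out[mask] = 0.0
    return out

def random_shift(img, max_shift=4):
    """Randomly shift the image by up to max_shift pixels along each axis."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def make_umtra_task(unlabeled_images, n_way=5):
    """Build one N-way, 1-shot task from unlabeled data.

    Sample N images at random, treat each as its own class (pseudo-labels
    0..N-1) for the training split, and create the validation split by
    augmenting each sampled image.
    """
    idx = rng.choice(len(unlabeled_images), size=n_way, replace=False)
    train_x = unlabeled_images[idx]
    train_y = np.arange(n_way)
    val_x = np.stack([random_shift(zero_pixels(x)) for x in train_x])
    val_y = train_y.copy()
    return (train_x, train_y), (val_x, val_y)

# Usage: 1000 fake 28x28 "unlabeled" images (Omniglot-sized), one 5-way task.
pool = rng.random((1000, 28, 28))
(train_x, train_y), (val_x, val_y) = make_umtra_task(pool, n_way=5)
print(train_x.shape, val_x.shape)  # (5, 28, 28) (5, 28, 28)
```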
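
For the Dataset Splits row, the quoted protocol partitions Omniglot's 1623 characters into 1200 training, 100 validation, and 323 test characters. A tiny sketch of that bookkeeping; the shuffle, seed, and character_ids array are illustrative assumptions rather than the exact procedure of [28]:

```python
# Hedged sketch of the character-level Omniglot split (1200/100/323).
import numpy as np

rng_split = np.random.default_rng(42)      # illustrative seed, not from [28]
character_ids = np.arange(1623)            # Omniglot has 1623 characters total
rng_split.shuffle(character_ids)

train_chars = character_ids[:1200]         # 1200 training characters
val_chars = character_ids[1200:1300]       # 100 validation characters
test_chars = character_ids[1300:]          # remaining 323 test characters
assert len(test_chars) == 323
```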
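
For the Experiment Setup row, the sketch below shows where the fixed hyperparameters (meta-batch size N_MB = 25, N_U = 1 inner update) enter a MAML-style meta-training loop. It reuses make_umtra_task, rng, and pool from the first sketch, and it substitutes a plain softmax-regression model with a first-order outer update so the loop stays short and runnable; the paper builds on MAML with a neural network model, and the learning rates and step count here are assumptions.

```python
# Continues the first sketch (reuses make_umtra_task, rng, and pool).
# N_MB and N_U are the values the paper fixes; everything else is assumed.
import numpy as np

N_MB, N_U = 25, 1            # meta-batch size and inner updates (Table 2)
N_WAY, DIM = 5, 28 * 28
INNER_LR, OUTER_LR = 0.05, 0.01

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss_grad(W, x, y):
    """Gradient of mean cross-entropy for a linear softmax classifier."""
    p = softmax(x @ W)
    p[np.arange(len(y)), y] -= 1.0
    return x.T @ p / len(y)

W = rng.normal(scale=0.01, size=(DIM, N_WAY))   # meta-parameters

for step in range(100):                         # meta-iterations
    outer_grad = np.zeros_like(W)
    for _ in range(N_MB):                       # one meta-batch of tasks
        (tx, ty), (vx, vy) = make_umtra_task(pool, n_way=N_WAY)
        tx, vx = tx.reshape(N_WAY, -1), vx.reshape(N_WAY, -1)
        W_task = W.copy()
        for _ in range(N_U):                    # inner adaptation (N_U = 1)
            W_task -= INNER_LR * loss_grad(W_task, tx, ty)
        # first-order approximation: outer gradient evaluated at adapted W
        outer_grad += loss_grad(W_task, vx, vy)
    W -= OUTER_LR * outer_grad / N_MB           # meta-update
```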