On Enforcing Better Conditioned Meta-Learning for Rapid Few-Shot Adaptation

Authors: Markus Hiller, Mehrtash Harandi, Tom Drummond

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our comprehensive evaluations show that the proposed method significantly outperforms its unconstrained counterpart, especially during initial adaptation steps, while achieving comparable or better overall results on several few-shot classification tasks, creating the possibility of dynamically choosing the number of adaptation steps at inference time.
Researcher Affiliation | Academia | Markus Hiller (School of Computing and Information Systems, The University of Melbourne), Mehrtash Harandi (Department of Electrical and Computer Systems Engineering, Monash University), Tom Drummond (School of Computing and Information Systems, The University of Melbourne); markus.hiller@student.unimelb.edu.au, mehrtash.harandi@monash.edu, tom.drummond@unimelb.edu.au
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | "Code to be released upon acceptance."
Open Datasets | Yes | We evaluate our method on all five popular few-shot classification datasets, namely miniImageNet (Vinyals et al., 2016; Ravi & Larochelle, 2017), tieredImageNet (Ren et al., 2018), CIFAR-FS (Bertinetto et al., 2019), FC100 (Oreshkin et al., 2018) and CUB-200-2011 (Wah et al., 2011), hereafter referred to as CUB.
Dataset Splits | Yes | Let D_τ^train and D_τ^val be the training and validation set of some given task τ ∼ p(T) (e.g. image classification), respectively; and: We follow previous work like Chen et al. (2019) and report the averaged test accuracies of 600 randomly sampled experiments, using the k samples of the respective k-shot setting as support and 16 query samples for each class; and: best model picked based on highest validation accuracy. (See the episode-sampling sketch below the table.)
Hardware Specification | Yes | Our baselines and methods are trained on a single NVIDIA RTX 3090 for 60,000 episodes for 1-shot and 40,000 for 5-shot tasks...
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.y) are explicitly stated in the paper.
Experiment Setup | Yes | Our baselines and methods are trained on a single NVIDIA RTX 3090 for 60,000 episodes for 1-shot and 40,000 for 5-shot tasks, with the best model picked based on highest validation accuracy. We use 5 inner-loop updates and γ = 1. (See the inner-loop adaptation sketch below the table.)
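
As a rough illustration of the evaluation protocol quoted under Dataset Splits (600 randomly sampled episodes, k support and 16 query samples per class, accuracies averaged over episodes), the sketch below samples N-way k-shot episodes and aggregates accuracy. This is a minimal sketch of the standard protocol, not code from the paper; the function names (sample_episode, evaluate_episodes), the accuracy_fn callback, and the 95% confidence-interval formula are illustrative assumptions.

```python
import numpy as np

def sample_episode(labels, n_way=5, k_shot=1, n_query=16, rng=None):
    """Sample one N-way k-shot episode: k support and n_query query indices per class."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.where(labels == c)[0])
        support.extend(idx[:k_shot])                      # k support samples for this class
        query.extend(idx[k_shot:k_shot + n_query])        # 16 query samples for this class
    return np.array(support), np.array(query)

def evaluate_episodes(accuracy_fn, labels, n_episodes=600, **episode_kwargs):
    """Average test accuracy over 600 randomly sampled episodes (protocol of Chen et al., 2019).

    accuracy_fn(support_idx, query_idx) is assumed to adapt the model on the support
    indices and return query accuracy; its implementation is left to the caller.
    """
    accs = [accuracy_fn(*sample_episode(labels, **episode_kwargs)) for _ in range(n_episodes)]
    # 95% confidence interval (a common reporting convention, assumed here).
    return float(np.mean(accs)), 1.96 * float(np.std(accs)) / np.sqrt(n_episodes)
```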
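The Experiment Setup row reports 5 inner-loop updates, which points to a MAML-style gradient-based adaptation loop. The sketch below assumes such an inner loop in PyTorch; the inner learning rate, the model, and the use of torch.func.functional_call are placeholders rather than details from the paper, and the conditioning constraint weighted by γ = 1 is not shown because its exact form is not quoted in this report.

```python
import torch
import torch.nn.functional as F

def inner_adapt(model, params, support_x, support_y, n_steps=5, inner_lr=0.01):
    """MAML-style inner loop: n_steps gradient steps on the support set.

    `params` is a dict of task-specific parameter tensors; `inner_lr` is a
    placeholder value, not taken from the paper.
    """
    for _ in range(n_steps):
        # Forward pass with the current task-specific parameters.
        logits = torch.func.functional_call(model, params, (support_x,))
        loss = F.cross_entropy(logits, support_y)
        # create_graph=True keeps the graph so meta-gradients can flow through the updates.
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {name: p - inner_lr * g
                  for (name, p), g in zip(params.items(), grads)}
    return params

# Usage (illustrative): start from the meta-learned initialisation and adapt for 5 steps.
# params0 = {n: p.clone() for n, p in model.named_parameters()}
# adapted = inner_adapt(model, params0, x_support, y_support, n_steps=5)
```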