Memory Efficient Meta-Learning with Large Images

Authors: John Bronskill, Daniela Massiceti, Massimiliano Patacchiola, Katja Hofmann, Sebastian Nowozin, Richard Turner

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we demonstrate that meta-learners trained with LITE achieve state-of-the-art performance among meta-learners on two challenging few-shot classification benchmarks: (i) ORBIT [14], a real-world few-shot object recognition dataset for teachable object recognizers; and (ii) VTAB+MD [11], which is composed of the Visual Task Adaptation Benchmark (VTAB) [20] and Meta-Dataset (MD) [13] and combines both few-shot and transfer learning tasks.
Researcher Affiliation | Collaboration | John Bronskill (University of Cambridge, jfb54@cam.ac.uk); Daniela Massiceti (Microsoft Research, dmassiceti@microsoft.com); Massimiliano Patacchiola (University of Cambridge, mp2008@cam.ac.uk); Katja Hofmann (Microsoft Research, kahofman@microsoft.com); Sebastian Nowozin (Microsoft Research, senowoz@microsoft.com); Richard E. Turner (University of Cambridge, ret26@cam.ac.uk)
Pseudocode | Yes | Algorithm 1 LITE for a meta-training task τ (an illustrative sketch of this idea follows the table)
Open Source Code | Yes | Source code for ORBIT experiments is available at https://github.com/microsoft/ORBIT-Dataset and for the VTAB+MD experiments at https://github.com/cambridge-mlg/LITE.
Open Datasets | Yes | ORBIT [14], a real-world few-shot object recognition dataset for teachable object recognizers, and VTAB+MD [11], which is composed of the Visual Task Adaptation Benchmark (VTAB) [20] and Meta-Dataset (MD) [13]
Dataset Splits | Yes | The benchmark splits data collectors into disjoint train, validation, and test user sets along with their corresponding objects and videos.
Hardware Specification | Yes | Simple CNAPS + LITE trains in about 20 hours on a single 16GB GPU.
Software Dependencies | No | The paper mentions "PyTorch" and "TensorFlow Datasets" but does not provide specific version numbers for these or any other software dependencies required to replicate the experiments.
Experiment Setup | Yes | Experiments: We meta-train ProtoNets [3], CNAPs [4] and Simple CNAPS [5] with LITE on tasks composed of large (224×224) images... For each model, we consider a ResNet-18 (RN-18) and an EfficientNet-B0 (EN-B0) feature extractor, both pre-trained on ImageNet [29]. We follow the task sampling protocols described in [14] (see Appendices B and C.1 for details).
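
The "Pseudocode" row refers to the paper's Algorithm 1 (LITE), whose core idea is to embed the entire support set of a task but back-propagate through only a random subset of it, which is what makes meta-training with large 224×224 images fit on a single 16GB GPU. The snippet below is a minimal, hypothetical PyTorch sketch of that idea using a prototypical-network classifier; the function name `lite_task_loss`, the `num_grad` parameter, and the omission of details such as the gradient rescaling in the paper's derivation are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def lite_task_loss(feature_extractor, support_x, support_y, query_x, query_y, num_grad=8):
    """LITE-style loss for one meta-training task (illustrative sketch).

    All support images are embedded, but only a random subset of `num_grad`
    of them keeps an autograd graph; the rest are embedded under
    torch.no_grad(). Back-propagation therefore touches only the subset,
    which keeps activation memory low for large images.
    """
    n = support_x.shape[0]
    grad_mask = torch.zeros(n, dtype=torch.bool)
    grad_mask[torch.randperm(n)[:num_grad]] = True

    # Embed the gradient-carrying subset normally ...
    z_grad = feature_extractor(support_x[grad_mask])
    # ... and the remaining support images without building a graph.
    with torch.no_grad():
        z_nograd = feature_extractor(support_x[~grad_mask])

    # Prototypical-network classifier: one prototype per class, averaged
    # over both the with-grad and no-grad support embeddings.
    classes = support_y.unique()  # sorted class ids present in the support set
    prototypes = torch.stack([
        torch.cat([z_grad[support_y[grad_mask] == c],
                   z_nograd[support_y[~grad_mask] == c]]).mean(dim=0)
        for c in classes
    ])

    # Query loss: softmax over negative squared distances to the prototypes.
    q = feature_extractor(query_x)
    logits = -torch.cdist(q, prototypes) ** 2
    targets = torch.searchsorted(classes, query_y)  # map labels to prototype indices
    return F.cross_entropy(logits, targets)
```

Because only `num_grad` support activations are kept for the backward pass, peak memory grows with the subset size rather than with the full task size, which is consistent with the hardware row above reporting that Simple CNAPS + LITE trains on a single 16GB GPU.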