Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Memory Efficient Meta-Learning with Large Images

Authors: John Bronskill, Daniela Massiceti, Massimiliano Patacchiola, Katja Hofmann, Sebastian Nowozin, Richard E. Turner

NeurIPS 2021 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | In this section, we demonstrate that meta-learners trained with LITE achieve state-of-the-art performance among meta-learners on two challenging few-shot classification benchmarks: (i) ORBIT [14], which is a real-world few-shot object recognition dataset for teachable object recognizers; and (ii) VTAB+MD [11], which is composed of the Visual Task Adaptation Benchmark (VTAB) [20] and Meta-Dataset (MD) [13] and combines both few-shot and transfer learning tasks. |
| Researcher Affiliation | Collaboration | John Bronskill (University of Cambridge), Daniela Massiceti (Microsoft Research), Massimiliano Patacchiola (University of Cambridge), Katja Hofmann (Microsoft Research), Sebastian Nowozin (Microsoft Research), Richard E. Turner (University of Cambridge) |
| Pseudocode | Yes | Algorithm 1: LITE for a meta-training task τ (a sketch of the idea follows the table) |
| Open Source Code | Yes | Source code for the ORBIT experiments is available at https://github.com/microsoft/ORBIT-Dataset and for the VTAB+MD experiments at https://github.com/cambridge-mlg/LITE. |
| Open Datasets | Yes | (i) ORBIT [14], which is a real-world few-shot object recognition dataset for teachable object recognizers; and (ii) VTAB+MD [11], which is composed of the Visual Task Adaptation Benchmark (VTAB) [20] and Meta-Dataset (MD) [13] |
| Dataset Splits | Yes | The benchmark splits data collectors into disjoint train, validation, and test user sets along with their corresponding objects and videos. |
| Hardware Specification | Yes | Simple CNAPs + LITE trains in about 20 hours on a single 16GB GPU. |
| Software Dependencies | No | The paper mentions PyTorch and TensorFlow Datasets but does not provide version numbers for these or any other software dependencies required to replicate the experiments. |
| Experiment Setup | Yes | We meta-train ProtoNets [3], CNAPs [4] and Simple CNAPs [5] with LITE on tasks composed of large (224 × 224) images... For each model, we consider a ResNet-18 (RN-18) and EfficientNet-B0 (EN-B0) feature extractor, both pre-trained on ImageNet [29]. We follow the task sampling protocols described in [14] (see Appendices B and C.1 for details). |
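
The Pseudocode row above references Algorithm 1 (LITE for a meta-training task τ). To make the memory trick concrete, below is a minimal PyTorch sketch of the LITE idea for a prototypical-network-style learner: the full support set is embedded in the forward pass, but gradients are back-propagated through only a small random subset, with the remaining embeddings computed under torch.no_grad(). All names here (lite_task_loss, backprop_batch_size, the detach-based gradient rescaling) are illustrative assumptions, not the authors' released implementation; see the repositories linked in the Open Source Code row for the actual code.

```python
import torch
import torch.nn.functional as F


def lite_task_loss(embed, support_x, support_y, query_x, query_y,
                   num_classes, backprop_batch_size=16):
    """Sketch of LITE for one meta-training task (names are illustrative).

    The whole support set is embedded exactly, but only a random subset of
    size `backprop_batch_size` participates in the backward pass, so
    activation memory scales with the subset size rather than the task size.
    Assumes backprop_batch_size < len(support_x) and that every class in
    [0, num_classes) appears in the support set.
    """
    n = support_x.shape[0]
    perm = torch.randperm(n)
    grad_idx = perm[:backprop_batch_size]    # back-propagated subset
    nograd_idx = perm[backprop_batch_size:]  # treated as constants

    z_grad = embed(support_x[grad_idx])
    with torch.no_grad():
        z_nograd = embed(support_x[nograd_idx])

    # Rescale the subset's gradient by N/H so the estimate of the full
    # support-set gradient is unbiased (forward values are unchanged);
    # the detach trick below is one simple way to apply that scaling.
    scale = n / backprop_batch_size
    z_grad = z_grad.detach() + scale * (z_grad - z_grad.detach())

    z = torch.cat([z_grad, z_nograd], dim=0)
    y = torch.cat([support_y[grad_idx], support_y[nograd_idx]], dim=0)

    # Prototypical-network head: one mean embedding per class; queries are
    # classified by negative Euclidean distance to each prototype.
    prototypes = torch.stack([z[y == c].mean(dim=0)
                              for c in range(num_classes)])
    logits = -torch.cdist(embed(query_x), prototypes)
    return F.cross_entropy(logits, query_y)
```

Under this scheme, backward-pass memory grows with backprop_batch_size rather than with the number of support images, which is consistent with the Hardware Specification row's report that Simple CNAPs + LITE trains with 224 × 224 images on a single 16GB GPU.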