Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Modular Meta-Learning with Shrinkage

Authors: Yutian Chen, Abram L. Friesen, Feryal Behbahani, Arnaud Doucet, David Budden, Matthew Hoffman, Nando de Freitas

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we demonstrate that our method discovers a small set of meaningful task-specific modules and outperforms existing meta-learning approaches in domains like few-shot text-to-speech that have little task data and long adaptation horizons."
Researcher Affiliation | Industry | DeepMind, London, UK
Pseudocode | Yes | "Figure 1: (Left) Structure of a typical meta-learning algorithm. (Right) Bayesian shrinkage graphical model. The shared meta-parameters φ serve as the initialization of the neural network parameters θ_t for each task. The σ are shrinkage parameters. By learning these, the model automatically decides which subsets of parameters (i.e., modules) to fix for all tasks and which to adapt at test time."
Open Source Code | No | The paper does not provide an explicit statement of, or link to, open-source code for the described method.
Open Datasets | Yes | "We use the augmented Omniglot protocol of Flennerhag et al. [4], which necessitates long-horizon adaptation."
Dataset Splits | Yes | "We use 30 training alphabets (T = 30), 15 training images (K = 15), and 5 validation images per class."
Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., GPU models, CPU types, or cloud instances).
Software Dependencies | No | The paper names algorithms such as the conjugate gradient method and Adam [46], but does not list software packages with version numbers (e.g., "PyTorch 1.9").
Experiment Setup | Yes | "Following Flennerhag et al. [4], we use a 4-layer convnet and perform 100 steps of task adaptation."
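The Figure 1 caption quoted above describes the core mechanism: each task's parameters θ_t start at the shared meta-parameters φ, and a learned per-module shrinkage scale σ decides which modules stay near φ (effectively frozen) and which adapt at test time. A minimal sketch of that idea, assuming an illustrative quadratic (Gaussian-prior) shrinkage penalty, toy module names, and hand-picked hyperparameters not taken from the paper:

```python
import numpy as np

def adapt_task(phi, sigma, task_grad_fn, steps=100, lr=0.01):
    """Gradient-descent task adaptation with a shrinkage penalty.

    phi:          dict of module name -> meta-parameter array (initialization)
    sigma:        dict of module name -> shrinkage scale (small sigma ~ frozen)
    task_grad_fn: callable returning dict of task-loss gradients w.r.t. theta

    This is an illustrative sketch, not the paper's exact objective.
    """
    theta = {m: p.copy() for m, p in phi.items()}
    for _ in range(steps):
        grads = task_grad_fn(theta)
        for m in theta:
            # A Gaussian prior theta_m ~ N(phi_m, sigma_m^2 I) contributes
            # the penalty gradient (theta_m - phi_m) / sigma_m^2, pulling
            # the module back toward its meta-initialization.
            g = grads[m] + (theta[m] - phi[m]) / (sigma[m] ** 2)
            theta[m] -= lr * g
    return theta

# Toy usage: two "modules" with the same quadratic task loss pulling them
# toward 1.0; a small sigma keeps module "a" pinned near phi, while the
# large sigma lets module "b" adapt freely.
phi = {"a": np.zeros(3), "b": np.zeros(3)}
sigma = {"a": 0.3, "b": 10.0}
target = np.ones(3)
grad_fn = lambda th: {m: th[m] - target for m in th}
theta = adapt_task(phi, sigma, grad_fn, steps=200, lr=0.01)
```

After adaptation, module "a" remains close to its initialization while module "b" has moved most of the way to the task optimum, mirroring the fix-vs-adapt module selection the caption describes.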