Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Probabilistic Model-Agnostic Meta-Learning

Authors: Chelsea Finn, Kelvin Xu, Sergey Levine

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The goal of our experimental evaluation is to answer the following questions: (1) can our approach enable sampling from the distribution over potential functions underlying the training data?, (2) does our approach improve upon the MAML algorithm when there is ambiguity over the class of functions?, and (3) can our approach scale to deep convolutional networks? We study two illustrative toy examples and a realistic ambiguous few-shot image classification problem. For both experimental domains, we compare MAML to our probabilistic approach."
Researcher Affiliation | Academia | "Chelsea Finn, Kelvin Xu, Sergey Levine (UC Berkeley)"
Pseudocode | Yes | "Algorithm 1 Meta-training, differences from MAML in red; Algorithm 2 Meta-testing"
Open Source Code | Yes | "Additional qualitative results and code can be found at https://sites.google.com/view/probabilistic-maml/"
Open Datasets | Yes | "We additionally provide a comparison on the Mini Imagenet benchmark and specify the hyperparameters in the supplementary appendix. We study an ambiguous extension to the CelebA attribute classification task."
Dataset Splits | No | "Meta-learning algorithms proceed by sampling data from a given task, and splitting the sampled data into a set of a few datapoints, Dtr, used for training the model, and a set of datapoints for measuring whether or not training was effective, Dtest." The paper describes the use of Dtr and Dtest but does not give specific train/validation/test split percentages or counts.
Hardware Specification | No | The paper does not describe the hardware used to run its experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers.
Experiment Setup | No | "We additionally provide a comparison on the Mini Imagenet benchmark and specify the hyperparameters in the supplementary appendix." The specific hyperparameters are not provided in the main text.
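The Dtr/Dtest task split the paper describes (and which the Dataset Splits variable flags as unquantified) can be sketched as follows. The function name and the sizes k_train=5 and k_test=15 are illustrative assumptions, not values taken from the paper.

```python
import random

def sample_task_split(examples, k_train=5, k_test=15, seed=0):
    """Sample data for one task and split it into a small training set
    (Dtr) and a held-out set for measuring adaptation (Dtest).

    `examples` is a list of (input, label) pairs for a single task.
    k_train/k_test are hypothetical sizes chosen for illustration.
    """
    rng = random.Random(seed)
    # Draw k_train + k_test distinct datapoints from the task, ...
    sampled = rng.sample(examples, k_train + k_test)
    # ... then split them into Dtr and Dtest without overlap.
    d_tr, d_test = sampled[:k_train], sampled[k_train:]
    return d_tr, d_test
```

Because the paper reports no split counts, any concrete choice of k_train and k_test here is a guess; the sketch only illustrates the sampling-then-splitting protocol.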
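For context on the structure of Algorithm 1, a minimal meta-training step of plain (non-probabilistic) MAML, the baseline the paper modifies, might look like the toy sketch below. The scalar linear model, the learning rates, and the first-order gradient approximation are all illustrative assumptions, not the paper's probabilistic variant.

```python
def maml_step(theta, tasks, inner_lr=0.01, outer_lr=0.001):
    """One toy MAML meta-training step on a 1-D regression model
    y_hat = theta * x with squared-error loss.

    `tasks` is a list of (d_tr, d_test) pairs, each a list of (x, y)
    datapoints. Uses a first-order approximation: the outer gradient is
    evaluated at the adapted parameters rather than differentiated
    through the inner update.
    """
    def loss_grad(t, data):
        # d/dt of mean squared error sum((t*x - y)^2) / len(data)
        return sum(2 * (t * x - y) * x for x, y in data) / len(data)

    meta_grad = 0.0
    for d_tr, d_test in tasks:
        # Inner loop: adapt to the task's training set Dtr.
        theta_i = theta - inner_lr * loss_grad(theta, d_tr)
        # Outer loop: accumulate the post-adaptation loss gradient on Dtest.
        meta_grad += loss_grad(theta_i, d_test)
    return theta - outer_lr * meta_grad / len(tasks)
```

The paper's Algorithm 1 replaces the deterministic inner update with sampling from a learned distribution over adapted parameters; this sketch shows only the shared inner/outer-loop skeleton.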