Meta-Learning Requires Meta-Augmentation

Authors: Janarthanan Rajendran, Alexander Irpan, Eric Jang

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate that meta-augmentation produces large complementary benefits to recently proposed meta-regularization techniques." (Abstract); "Finally, we show the importance of meta-augmentation on a variety of benchmarks and meta-learning algorithms." (Section 1, Introduction); Section 5, "Experiments". (A sketch of the label-shuffling idea follows the table.)
Researcher Affiliation | Collaboration | Janarthanan Rajendran (University of Michigan, rjana@umich.edu); Alex Irpan (Google Brain, alexirpan@google.com); Eric Jang (Google Brain, ejang@google.com)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code and data are available at https://github.com/google-research/google-research/tree/master/meta_augmentation.
Open Datasets | Yes | Mini-ImageNet: "Few-shot classification benchmarks such as Mini-ImageNet [36]..."; Omniglot: "The Omniglot dataset [20]..."; Pascal3D pose regression: "We show that for the regression problem introduced by Yin et al. [37]..."
Dataset Splits | No | The paper refers to "meta-training and meta-test sets of tasks" and plots "Val pre-update" and "Val post-update" curves, implying a validation set used for early stopping, but it does not give explicit percentages or sample counts for any split. (An illustrative class-level split is sketched after the table.)
Hardware Specification | No | The paper does not report the hardware used for its experiments (GPU/CPU models, processor speeds, or memory sizes).
Software Dependencies | No | The paper does not name the libraries or solvers, with version numbers, needed to replicate the experiments.
Experiment Setup | No | The paper states settings such as 1-shot, 5-way classification and refers to the learning rate and weight decay as hyperparameters, but the main text does not give a comprehensive list of hyperparameter values or system-level training configurations. (An illustrative episode sampler is sketched after the table.)
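
The abstract quoted in the Research Type row centers on meta-augmentation, which for classification amounts to re-randomizing the class-label assignment within each task so that labels cannot be memorized across tasks. The sketch below applies one random label permutation to a whole N-way episode; the function name ce_augment_task and the episode layout are illustrative assumptions, not the authors' implementation (which lives in the linked repository).

```python
import numpy as np

def ce_augment_task(support_labels, query_labels, num_classes, rng):
    # Draw one fresh permutation of the class indices for this task and
    # apply it to both the support and the query labels: the input-to-label
    # mapping stays consistent within the episode, but the network cannot
    # memorize a fixed mapping across episodes. Inputs are left untouched.
    perm = rng.permutation(num_classes)
    return perm[support_labels], perm[query_labels]

rng = np.random.default_rng(0)
support_y = np.array([0, 1, 2, 3, 4])        # 1-shot, 5-way support labels
query_y = np.array([0, 1, 1, 2, 3, 4])
aug_support_y, aug_query_y = ce_augment_task(support_y, query_y, 5, rng)
```

The crux is that the same permutation is applied to the support and query sets of a task: the learner must still infer the mapping from the support set, which is what prevents memorization.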
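On Dataset Splits: few-shot benchmarks are conventionally split at the class level rather than the example level, so meta-test tasks are built from classes never seen during meta-training. The helper and fractions below are illustrative assumptions, not values reported in the paper (for reference, the standard Mini-ImageNet protocol uses 64/16/20 classes).

```python
import random

def meta_split(class_ids, val_frac=0.16, test_frac=0.20, seed=0):
    # Partition class ids (not individual examples) into disjoint
    # meta-train / meta-validation / meta-test pools.
    ids = list(class_ids)
    random.Random(seed).shuffle(ids)
    n_test = round(len(ids) * test_frac)
    n_val = round(len(ids) * val_frac)
    return {
        "meta_test": ids[:n_test],
        "meta_val": ids[n_test:n_test + n_val],
        "meta_train": ids[n_test + n_val:],
    }

splits = meta_split(range(100))  # e.g. 100 classes -> 20 / 16 / 64
```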
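On Experiment Setup: "1-shot, 5-way" means each task presents five classes with one labeled support example per class, plus held-out query examples from the same classes. A minimal episode sampler, assuming a hypothetical data layout (a dict mapping class id to a list of example arrays, not the paper's actual pipeline), could look like this:

```python
import numpy as np

def sample_episode(examples_by_class, n_way=5, k_shot=1, n_query=15,
                   rng=None):
    # Draw n_way distinct classes, then split each class's examples
    # into k_shot support items and n_query query items.
    rng = rng or np.random.default_rng()
    classes = rng.choice(sorted(examples_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, cls in enumerate(classes):
        pool = examples_by_class[cls]
        idx = rng.permutation(len(pool))
        support += [(pool[i], label) for i in idx[:k_shot]]
        query += [(pool[i], label) for i in idx[k_shot:k_shot + n_query]]
    return support, query
```

A sampler like this would pair naturally with the label-permutation sketch above: augmentation is applied per episode, after sampling.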