Meta-Learning Bidirectional Update Rules

Authors: Mark Sandler, Max Vladymyrov, Andrey Zhmoginov, Nolan Miller, Tom Madams, Andrew Jackson, Blaise Agüera y Arcas

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we describe experimental evaluation of update rules using BLUR. Our code uses TensorFlow (Abadi et al., 2015) and JAX (Bradbury et al., 2018) libraries. All our experiments run on GPU.
Researcher Affiliation | Industry | Mark Sandler 1, Max Vladymyrov 1, Andrey Zhmoginov 1, Nolan Miller 1, Andrew Jackson 1, Tom Madams 1, Blaise Agüera y Arcas 1. 1 Google Research. Correspondence to: Mark Sandler <sandler@google.com>.
Pseudocode | No | The paper describes mathematical equations for the update rules but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code for the paper is available at https://github.com/google-research/google-research/tree/master/blur
Open Datasets | Yes | The tasks we use for training are 'and', 'xor', 'two-moon' (Pedregosa et al., 2011) and several others. We use MNIST as a meta-training dataset. Specifically we used MNIST, a 10-class letter subset of E-MNIST (Cohen et al., 2017), Fashion MNIST (Xiao et al., 2017), and the full 62-category E-MNIST.
Dataset Splits | No | The paper mentions using 'held out datasets' for validation and specifies datasets like MNIST for 'meta-validation', but it does not provide specific split percentages, sample counts, or explicit details about how these datasets were partitioned for validation.
Hardware Specification | No | The paper only states 'All our experiments run on GPU' without providing specific details such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions using 'TensorFlow', 'JAX', and 'CMA-ES/pycma' libraries but does not provide specific version numbers for these software components.
Experiment Setup | Yes | We start with 8 identical randomly initialized genomes and train them for 10,000 steps with 10 unrolls. Then we increase the unroll number by 5 for each consecutive 10,000 steps and synchronize genomes across all runs. Each meta-learning step represents training the network on 15 batches of 128 inputs, then evaluating its accuracy on 20 batches of 128 inputs.
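
The Experiment Setup row quotes a concrete meta-training schedule. The minimal Python sketch below lays out that schedule under stated assumptions: `init_genome`, `meta_step`, `sync_genomes`, and `sample_task` are hypothetical placeholders standing in for the authors' BLUR code (not their actual API), and the quoted text does not say exactly how the unroll count interacts with the 15 training batches per step, so `meta_step` simply receives both.

```python
import numpy as np

# Constants taken from the quoted Experiment Setup row.
NUM_GENOMES = 8
STEPS_PER_PHASE = 10_000
INITIAL_UNROLLS = 10
UNROLL_INCREMENT = 5
TRAIN_BATCHES, EVAL_BATCHES, BATCH_SIZE = 15, 20, 128


def meta_train(num_phases, init_genome, meta_step, sync_genomes, sample_task, seed=0):
    """Sketch of the unroll schedule; the four callables are hypothetical placeholders."""
    rng = np.random.default_rng(seed)
    # Start from 8 identical copies of one randomly initialized genome.
    genome = init_genome(rng)
    genomes = [genome.copy() for _ in range(NUM_GENOMES)]
    unrolls = INITIAL_UNROLLS
    for _ in range(num_phases):
        for _ in range(STEPS_PER_PHASE):
            for i in range(NUM_GENOMES):
                # One meta-learning step: train the inner network on 15 batches
                # of 128 inputs (unrolled over `unrolls` updates), then evaluate
                # its accuracy on 20 batches of 128 inputs.
                task = sample_task(rng, TRAIN_BATCHES, EVAL_BATCHES, BATCH_SIZE)
                genomes[i] = meta_step(genomes[i], task, unrolls)
        # After each block of 10,000 steps: grow the unroll length by 5 and
        # synchronize genomes across all runs.
        unrolls += UNROLL_INCREMENT
        genomes = sync_genomes(genomes)
    return genomes
```

The phase structure (fixed 10,000-step blocks with a longer unroll and a genome sync at each boundary) is the part taken directly from the quoted setup; everything inside `meta_step` is left abstract because the excerpt does not describe it.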
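
The Open Datasets row lists only publicly available meta-training datasets. As a rough illustration (not taken from the paper's code), the snippet below shows how one might load them with tensorflow_datasets; the tfds dataset names are assumptions, and the paper's 10-class letter subset of E-MNIST has no standard tfds config, so it is omitted.

```python
import tensorflow_datasets as tfds

# Assumed tfds names for the datasets cited in the Open Datasets row.
mnist = tfds.load("mnist", split="train", as_supervised=True)
fashion = tfds.load("fashion_mnist", split="train", as_supervised=True)
emnist_62 = tfds.load("emnist/byclass", split="train", as_supervised=True)  # 62 categories

# Each dataset yields (image, label) pairs when loaded with as_supervised=True.
for image, label in mnist.take(1):
    print(image.shape, int(label))  # (28, 28, 1) and an integer class id
```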