Online Meta-Learning

Authors: Chelsea Finn, Aravind Rajeswaran, Sham Kakade, Sergey Levine

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental evaluation on three different large-scale problems suggests that the proposed algorithm significantly outperforms alternatives based on traditional online learning approaches.
Researcher Affiliation | Academia | UC Berkeley; University of Washington. Correspondence to: Chelsea Finn <cbfinn@stanford.edu>, Aravind Rajeswaran <aravraj@cs.washington.edu>.
Pseudocode | Yes | Algorithm 1: Online Meta-Learning with FTML. (A minimal sketch of the FTML update appears after this table.)
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | The experiments involve vision-based sequential learning tasks with the MNIST, CIFAR-100, and PASCAL 3D+ datasets.
Dataset Splits | No | The paper mentions 'held-out data D_t^test' for evaluation and 'meta-training tasks', but it does not specify explicit percentages or sample counts for train/validation/test splits, nor does it explicitly describe a validation split.
Hardware Specification | No | The paper does not provide any details about the hardware (e.g., GPU/CPU models, memory, or computing platform) used to run the experiments.
Software Dependencies | No | The paper mentions using 'Adam (Kingma & Ba, 2015)', 'standard automatic differentiation libraries', and the 'MuJoCo physics engine (Todorov et al., 2012)', but it does not give version numbers for any of these software dependencies.
Experiment Setup | Yes | For Rainbow MNIST, 'we set 90% classification accuracy as the proficiency threshold'; for pose prediction, the paper 'set[s] the proficiency threshold to an error of 0.05'. It also reports its hyperparameter settings and notes that 'we meta-train with update minibatches of size at-most 25'. (See the proficiency-threshold helper after the FTML sketch below.)
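
The Pseudocode row above only names Algorithm 1 (FTML, "Follow the Meta-Leader"). The sketch below illustrates the core idea: at each round, adapt the meta-parameters to the newest task with a MAML-style inner gradient step, then (approximately) minimize the sum of post-adaptation losses over all tasks seen so far. This is a minimal sketch, not the authors' code: the quadratic task loss, single inner step, fixed step sizes, plain SGD solve of the follow-the-leader objective, and reuse of one batch as both train and test data are all simplifying assumptions (the paper's practical algorithm samples minibatches of past tasks and uses Adam).

```python
import jax
import jax.numpy as jnp

def task_loss(w, batch):
    # Mean-squared error of a linear model on one task's data.
    x, y = batch
    return jnp.mean((x @ w - y) ** 2)

def inner_update(w, train_batch, alpha=0.1):
    # MAML-style adaptation U_k: one gradient step on the task's
    # training data, starting from the meta-parameters w.
    return w - alpha * jax.grad(task_loss)(w, train_batch)

def meta_loss(w, task):
    # Loss of the adapted parameters on the task's held-out data.
    train_batch, test_batch = task
    return task_loss(inner_update(w, train_batch), test_batch)

def ftml_round(w, tasks, lr=0.01, steps=5):
    # FTML round: approximately minimize the sum of meta-losses over
    # all tasks seen so far (here, a few full-batch SGD steps).
    for _ in range(steps):
        grads = [jax.grad(meta_loss)(w, task) for task in tasks]
        w = w - lr * sum(grads) / len(tasks)
    return w

# Toy online stream of linear-regression tasks (all names hypothetical).
key, w, tasks = jax.random.PRNGKey(0), jnp.zeros(3), []
for t in range(10):
    key, k1, k2 = jax.random.split(key, 3)
    w_true = jax.random.normal(k1, (3,))
    x = jax.random.normal(k2, (25, 3))   # minibatch of size 25
    batch = (x, x @ w_true)
    tasks.append((batch, batch))         # reuse as train/test in this sketch
    w = ftml_round(w, tasks)             # meta-update after each new task
```

The design choice that distinguishes FTML from plain follow-the-leader is that the regret-style objective is evaluated on the *adapted* parameters inner_update(w, ...), not on w itself, so the meta-parameters are driven toward a point from which each past task is solvable in one gradient step.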
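The proficiency thresholds in the Experiment Setup row reduce to a simple measurement: the first learning step at which a task's metric crosses a threshold. A self-contained Python helper makes this concrete; the function name and the traces below are hypothetical, not taken from the paper.

```python
def steps_to_proficiency(metric_trace, threshold, higher_is_better=True):
    """First step at which a learning curve crosses the proficiency
    threshold (e.g. 90% accuracy for Rainbow MNIST, or 0.05 error
    for pose prediction), or None if it is never reached."""
    for step, value in enumerate(metric_trace):
        crossed = value >= threshold if higher_is_better else value <= threshold
        if crossed:
            return step
    return None

# Hypothetical accuracy trace while adapting to a new task.
print(steps_to_proficiency([0.42, 0.71, 0.88, 0.93], 0.90))  # -> 3
# Pose-prediction-style error trace (lower is better).
print(steps_to_proficiency([0.20, 0.08, 0.04], 0.05,
                           higher_is_better=False))          # -> 2
```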