Continuous Meta-Learning without Tasks

Authors: James Harrison, Apoorva Sharma, Chelsea Finn, Marco Pavone

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We investigate the performance of MOCA in five problem settings: three in regression and two in classification. Our primary goal is to characterize how effectively MOCA can enable meta-learning algorithms to perform without access to task segmentation. We compare against baseline sliding window models, which again use the same meta-learning algorithm but always condition on the last n data points, for n ∈ {5, 10, 50}. Performance of MOCA against baselines is presented in Fig. 3 for all problem domains.
Researcher Affiliation | Academia | James Harrison, Apoorva Sharma, Chelsea Finn, Marco Pavone; Stanford University, Stanford, CA; {jharrison, apoorva, cbfinn, pavone}@stanford.edu
Pseudocode | Yes | Algorithm 1: Meta-Learning via Online Changepoint Analysis
Open Source Code | Yes | Code is available at https://github.com/StanfordASL/moca
Open Datasets | Yes | Rainbow MNIST dataset of [11]; miniImageNet benchmark task [44].
Dataset Splits | No | The paper mentions sampling time series from training data but does not provide specific percentages or counts for training, validation, or test splits. It refers to 'training data' and 'test time' but gives no explicit data partitioning for reproducibility.
Hardware Specification | No | The paper does not specify any hardware used for running experiments, such as particular GPU or CPU models.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | No | The paper describes some general aspects of the experimental process, such as processing time series sequentially and sampling shorter time series. However, it lacks specific setup details that are crucial for reproducibility, such as hyperparameter values (e.g., learning rate, batch size, number of epochs) and optimizer settings.
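The Pseudocode row above names the paper's Algorithm 1, "Meta-Learning via Online Changepoint Analysis." The core of such an algorithm is a Bayesian online changepoint detection (BOCPD)-style recursion over a run-length belief. The sketch below is a minimal, generic version of that recursion, not the authors' implementation: the function name, interface, and constant hazard rate are illustrative assumptions, and the per-run-length likelihoods would come from the underlying meta-learning model's posterior predictive.

```python
import numpy as np

def runlength_belief_update(b, nll, hazard=0.01):
    """One BOCPD-style run-length belief update (illustrative sketch).

    b      : belief over current run lengths 0..t-1 (nonnegative, sums to 1)
    nll    : negative log-likelihood of the new observation under the
             posterior predictive for each run length (same length as b)
    hazard : prior probability that a new task begins at this step
             (assumed constant here)
    Returns the normalized belief over run lengths 0..t.
    """
    lik = np.exp(-np.asarray(nll, dtype=float))
    # Run continues: each run length r grows to r + 1.
    growth = (1.0 - hazard) * b * lik
    # Changepoint: all run lengths collapse to r = 0.
    changepoint = hazard * np.sum(b * lik)
    b_new = np.concatenate(([changepoint], growth))
    return b_new / b_new.sum()
```

With a single run length and uniform likelihood, the updated mass on run length 0 equals the hazard rate, which is a quick sanity check that the recursion is normalized correctly.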