Unsupervised Learning via Meta-Learning

Authors: Kyle Hsu, Sergey Levine, Chelsea Finn

ICLR 2019

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the embedding learned by four prior unsupervised learning methods." |
| Researcher Affiliation | Academia | Kyle Hsu, University of Toronto, kyle.hsu@mail.utoronto.ca; Sergey Levine and Chelsea Finn, University of California, Berkeley, {svlevine,cbfinn}@eecs.berkeley.edu |
| Pseudocode | Yes | Algorithm 1: CACTUs for classification |
| Open Source Code | Yes | "Links to code for the experiments can be found at https://sites.google.com/view/unsupervised-via-meta." |
| Open Datasets | Yes | "Across four image datasets (MNIST, Omniglot, miniImageNet, and CelebA)" and, in Appendix G, the "ILSVRC 2012 dataset's training split (Russakovsky et al., 2015)" |
| Dataset Splits | Yes | "We partition each dataset into meta-training, meta-validation, and meta-testing splits." |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for the experiments. It only makes general statements, e.g., that it uses "architectures from prior work". |
| Software Dependencies | No | The paper mentions optimizers such as Adam (Kingma & Ba, 2014) and SGD, and states "We build on the authors' publicly available codebase found at https://github.com/cbfinn/maml", with similar statements for ProtoNets and the embedding-learning methods. However, it does not specify version numbers for any software dependencies such as Python, TensorFlow, or other libraries. |
| Experiment Setup | Yes | Appendix E, "Hyperparameters and Architectures": Table 5, MAML hyperparameter summary; Table 6, ProtoNets hyperparameter summary; Table 11, MAML hyperparameter summary for ImageNet. |
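The pseudocode the report refers to, Algorithm 1 (CACTUs), constructs meta-learning tasks from unlabeled data by clustering learned embeddings and treating cluster assignments as pseudo-labels for N-way, K-shot episodes. A minimal NumPy sketch of that task-construction idea follows; the function names, the toy Lloyd's-iteration k-means, and all parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Toy Lloyd's algorithm: cluster embeddings X (n, d) into k groups."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster is empty.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def make_task(X, cluster_labels, n_way=5, k_shot=1, k_query=5, rng=None):
    """Build one N-way task using cluster assignments as pseudo-labels."""
    if rng is None:
        rng = np.random.default_rng()
    # Only clusters with enough members can supply support + query examples.
    eligible = [c for c in np.unique(cluster_labels)
                if (cluster_labels == c).sum() >= k_shot + k_query]
    chosen = rng.choice(eligible, size=n_way, replace=False)
    support, query = [], []
    for pseudo_label, c in enumerate(chosen):
        idx = rng.choice(np.flatnonzero(cluster_labels == c),
                         size=k_shot + k_query, replace=False)
        support += [(X[i], pseudo_label) for i in idx[:k_shot]]
        query += [(X[i], pseudo_label) for i in idx[k_shot:]]
    return support, query
```

In the paper, tasks built this way are fed to a standard meta-learner (MAML or ProtoNets); the sketch above only covers the unsupervised task-construction step.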