Contrastive Meta-Learning for Partially Observable Few-Shot Learning

Authors: Adam Jelley, Amos Storkey, Antreas Antoniou, Sam Devlin

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our approach on an adaptation of a comprehensive few-shot learning benchmark, Meta-Dataset, and demonstrate the benefits of POEM over other meta-learning methods at representation learning from partial observations.
Researcher Affiliation | Collaboration | Adam Jelley¹, Amos Storkey¹, Antreas Antoniou¹, Sam Devlin² (¹School of Informatics, University of Edinburgh; ²Microsoft Research, Cambridge)
Pseudocode | Yes | Full pseudocode for training POEM with this objective is provided in appendix A.3. (A generic illustrative sketch of an episodic contrastive objective is given after this table.)
Open Source Code | Yes | Implementation code is available at https://github.com/AdamJelley/POEM
Open Datasets | Yes | To comprehensively evaluate our approach, we adapt a large-scale few-shot learning benchmark, Meta-Dataset (Triantafillou et al., 2020)...
Dataset Splits | Yes | on all of which our models were trained, validated and tested according to the data partitions specified by the Meta-Dataset benchmark.
Hardware Specification | Yes | The Finetuning, Prototypical Network and POEM baselines were run on on-premise RTX 2080 GPUs. MAML required more memory and compute than available, so was run on cloud A100s.
Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2019)' and 'Torchvision' but does not provide specific version numbers for these software components.
Experiment Setup | Yes | Parameters used for the adapted PO-Meta-Dataset are provided in Table A.5. All parameters not listed were chosen to match Meta-Dataset defaults. All augmentations are applied using Torchvision, with parameters specified. (An illustrative augmentation sketch is given below.)
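
For orientation, the sketch below shows a generic episodic contrastive objective of the broad family POEM belongs to, written in PyTorch. It is an assumed illustration only: the per-class aggregation, similarity measure, and loss here are standard prototype-style choices, not the authors' actual POEM objective, which is specified in appendix A.3 of the paper.

    import torch
    import torch.nn.functional as F

    def episodic_contrastive_loss(support_emb, support_labels,
                                  query_emb, query_labels, temperature=0.1):
        # Generic episodic contrastive loss (illustrative; NOT the exact POEM
        # objective from appendix A.3). support_emb: (n_support, d) embeddings
        # of (partially observed) support examples; query_emb: (n_query, d).
        classes = support_labels.unique()
        # Aggregate support embeddings per class (prototype-style combination).
        prototypes = torch.stack([support_emb[support_labels == c].mean(dim=0)
                                  for c in classes])
        # Contrast each query embedding against all class representations.
        logits = (F.normalize(query_emb, dim=-1)
                  @ F.normalize(prototypes, dim=-1).T) / temperature
        # Map query labels to indices within this episode's class set.
        targets = torch.stack([(classes == y).nonzero().squeeze()
                               for y in query_labels])
        return F.cross_entropy(logits, targets)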
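
Similarly, since the augmentation parameters themselves live in Table A.5 (not reproduced here), the following is a minimal sketch of how such a Torchvision pipeline is typically composed; the specific transforms and parameter values below are assumptions for illustration, not the paper's settings.

    from torchvision import transforms

    # Illustrative augmentation pipeline; the actual transforms and parameter
    # values for PO-Meta-Dataset are those specified in Table A.5 of the paper.
    augment = transforms.Compose([
        transforms.RandomResizedCrop(84, scale=(0.5, 1.0)),   # assumed size/scale
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.ColorJitter(brightness=0.4, contrast=0.4,
                               saturation=0.4, hue=0.1),      # assumed strengths
        transforms.ToTensor(),
    ])
    # Usage: augmented = augment(pil_image)  # pil_image is a PIL.Image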