Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks

Authors: Hae Beom Lee, Hayeon Lee, Donghyun Na, Saehoon Kim, Minseop Park, Eunho Yang, Sung Ju Hwang

ICLR 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our Bayesian Task-Adaptive Meta-Learning (Bayesian TAML) on multiple realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches. Further ablation study confirms the effectiveness of each balancing component and the Bayesian learning framework. (Section 5, Experiments)
Researcher Affiliation | Collaboration | KAIST, Tmax Data, and AITRICS, South Korea; {haebeom.lee, hayeon926, eunhoy, sjhwang82}@kaist.ac.kr, donghyun_na@tmax.co.kr, {shkim, mike_seop}@aitrics.com
Pseudocode | No | The paper describes the proposed model and inference procedure using mathematical equations and descriptive text, but it does not include a clearly labeled "Pseudocode" or "Algorithm" block.
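Although the paper offers no algorithm box, the core adaptation loop can be sketched. The following is a minimal, assumed reconstruction: the balancing-variable names omega, gamma, and z follow the paper's notation (class-imbalance weights, learning-rate modulation, initialization modulation), but the update rule, toy model, and hyperparameters are illustrative and not the authors' exact procedure.

```python
# Illustrative MAML-style inner loop with task-dependent balancing
# variables in the spirit of Bayesian TAML. The variational inference
# network that produces omega, gamma, and z is omitted.
import torch
import torch.nn.functional as F

def forward(params, x):
    # Toy linear classifier: params = [W, b].
    W, b = params
    return x @ W + b

def inner_loop(theta, x_s, y_s, omega, gamma, z, alpha=0.01, steps=5):
    # z modulates the shared initialization for this task (OOD handling).
    params = [p * zi for p, zi in zip(theta, z)]
    for _ in range(steps):
        per_example = F.cross_entropy(forward(params, x_s), y_s,
                                      reduction='none')
        # omega reweights each example by its class (class imbalance).
        loss = (omega[y_s] * per_example).mean()
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # gamma rescales the inner learning rate (task imbalance).
        params = [p - alpha * g_lr * g
                  for p, g_lr, g in zip(params, gamma, grads)]
    return params

# Toy usage: 5-dimensional inputs, 3 classes, 8 support examples.
torch.manual_seed(0)
theta = [torch.randn(5, 3, requires_grad=True),
         torch.zeros(3, requires_grad=True)]
omega = torch.ones(3)                      # uniform class weights
gamma = [torch.ones(()), torch.ones(())]   # neutral LR modulation
z     = [torch.ones(()), torch.ones(())]   # neutral init modulation
x_s, y_s = torch.randn(8, 5), torch.randint(0, 3, (8,))
adapted = inner_loop(theta, x_s, y_s, omega, gamma, z)
```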
Open Source Code | Yes | Code is available at https://github.com/haebeom-lee/l2b
Open Datasets | Yes | Datasets: We validate our method on the following benchmark datasets. CIFAR-FS: This dataset (Bertinetto et al., 2019) is a variant of the CIFAR-100 dataset... miniImageNet: This dataset (Vinyals et al., 2016) is a subset of the ImageNet dataset... SVHN: This dataset (Netzer et al., 2011) is frequently used as an OOD dataset...
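Of the three, only SVHN ships with torchvision; a minimal sketch, assuming a local ./data directory (CIFAR-FS and miniImageNet require dedicated few-shot loaders and are not bundled with torchvision):

```python
# Fetch SVHN with torchvision for use as an OOD evaluation set.
# The root path is a placeholder.
from torchvision import datasets

svhn_test = datasets.SVHN(root="./data", split="test", download=True)
print(len(svhn_test))  # 26032 test images
```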
Dataset Splits | Yes | We split the dataset into 64/16/20 classes for training/validation/test. Aircraft: We split this dataset (Maji et al., 2013) into 70/15/15 classes for meta-training/validation/test with 100 examples for each class.
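A class-level partition like the 64/16/20 split above is straightforward to reproduce; the sketch below uses a hypothetical seed and ordering, not the authors' actual class assignment:

```python
import random

def split_classes(class_names, n_train=64, n_val=16, n_test=20, seed=0):
    # Shuffle once, then carve contiguous chunks. Meta-learning splits
    # partition *classes*, not examples, so meta-test tasks are built
    # from classes never seen during meta-training.
    assert len(class_names) == n_train + n_val + n_test
    rng = random.Random(seed)
    shuffled = class_names[:]
    rng.shuffle(shuffled)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_cls, val_cls, test_cls = split_classes(
    [f"class_{i}" for i in range(100)])
```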
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., CPU/GPU models, memory, etc.).
Software Dependencies | No | The paper mentions using PyTorch for implementation but does not specify its version number or any other software dependencies with their respective versions.
Experiment Setup | Yes | We set the number of inner-gradient steps to 5 for meta-training and 10 for meta-testing, for all the models that take inner-gradient steps. We meta-train all models for a total of 50K iterations with the meta-batch size set to 4. The outer learning rate is set to 0.001 for all the baselines and our models.
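These settings amount to a small configuration block; the sketch below collects them under illustrative names (not identifiers from the released code):

```python
# Hyperparameters quoted above; all key names are illustrative.
config = dict(
    inner_steps_train=5,     # inner-gradient steps at meta-training
    inner_steps_test=10,     # inner-gradient steps at meta-testing
    meta_iterations=50_000,  # total meta-training iterations
    meta_batch_size=4,       # tasks per meta-update
    outer_lr=1e-3,           # outer-loop (meta) learning rate
)
```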