MetaNorm: Learning to Normalize Few-Shot Batches Across Domains

Authors: Yingjun Du, Xiantong Zhen, Ling Shao, Cees G. M. Snoek

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "We verify its effectiveness by extensive evaluation on representative tasks suffering from the small batch and domain shift problems: few-shot learning and domain generalization. We further introduce an even more challenging setting: few-shot domain generalization. Results demonstrate that MetaNorm consistently achieves better, or at least competitive, accuracy compared to existing batch normalization methods." From Section 4 (Experimental Results): "We conduct an extensive set of experiments on a total of 17 datasets containing more than 15 million images."
Researcher Affiliation: Collaboration. Yingjun Du (1), Xiantong Zhen (1,2), Ling Shao (2), Cees G. M. Snoek (1). (1) AIM Lab, University of Amsterdam; (2) Inception Institute of Artificial Intelligence.
Pseudocode: Yes. "In this Appendix we provide the detailed MetaNorm algorithm descriptions to conduct batch normalization for few-shot classification (Algorithm 1), domain generalization (Algorithm 2), and few-shot domain generalization (Algorithm 3)."
Open Source Code: Yes. "Our code will be publicly released." https://github.com/YDU-AI/MetaNorm
Open Datasets: Yes. "miniImageNet: miniImageNet was originally proposed in (Vinyals et al., 2016) and has been widely used for evaluating few-shot learning algorithms. Omniglot: Omniglot (Lake et al., 2015) is a few-shot learning dataset... PACS: PACS (Li et al., 2017a) contains a total of 9,991 images..."
Dataset Splits: Yes. "We follow the train/val/test split introduced in (Ravi & Larochelle, 2017), which uses 64 classes for meta-training, 16 classes for meta-validation, and the remaining 20 classes for meta-testing."
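The reported split partitions miniImageNet's 100 classes into disjoint meta-train, meta-val, and meta-test sets. A minimal sketch of that partition, using placeholder class indices rather than the actual WordNet class names, which the excerpt does not list:

```python
# Sketch of the Ravi & Larochelle (2017) miniImageNet class split:
# 100 classes -> 64 meta-train, 16 meta-val, 20 meta-test (disjoint).
# Class indices 0..99 stand in for the real class names.

def split_classes(all_classes):
    assert len(all_classes) == 100, "miniImageNet has 100 classes"
    return {
        "meta_train": all_classes[:64],
        "meta_val": all_classes[64:80],
        "meta_test": all_classes[80:],
    }

splits = split_classes(list(range(100)))
```

Because the class sets are disjoint, meta-test tasks are built entirely from classes never seen during meta-training, which is what makes the evaluation "few-shot."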
Hardware Specification: Yes. "We implemented all models in the TensorFlow framework and tested on an NVIDIA Tesla V100."
Software Dependencies: No. The paper states it "implemented all models in the Tensorflow framework" but does not provide specific version numbers for TensorFlow or any other software dependencies.
Experiment Setup: Yes. "For MAML experiments, we used the codebase by Finn (Finn, 2017). We use the Adam optimizer with default parameters and a meta batch size of 4 tasks. The number of test episodes is set to 600. The number of training iterations is 60,000. We set λ = 0.001."
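The reported hyperparameters can be collected into a single configuration object; the field names below are our own choice (the paper reports the values but no such structure), and `lam` is assumed to be the λ weighting an auxiliary loss term:

```python
# Hedged sketch of the reported MAML-style training configuration.
# Values come from the paper's excerpt; the structure is illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainConfig:
    optimizer: str = "adam"       # Adam with default parameters
    meta_batch_size: int = 4      # tasks per meta-update
    train_iterations: int = 60_000
    test_episodes: int = 600
    lam: float = 0.001            # λ reported in the paper

cfg = TrainConfig()
```

Recording the configuration in one place like this is a common reproducibility aid, since every reported value can be checked against the paper at a glance.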