Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation

Authors: Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang, Ming-Hsuan Yang

ICLR 2020

Reproducibility assessment (Variable: Result. LLM response):
Research Type: Experimental. "We conduct extensive experiments and ablation studies under the domain generalization setting using five few-shot classification datasets: mini-ImageNet, CUB, Cars, Places, and Plantae. Experimental results demonstrate that the proposed feature-wise transformation layer is applicable to various metric-based models, and provides consistent improvements on the few-shot classification performance under domain shift."
Researcher Affiliation: Collaboration. Hung-Yu Tseng (University of California, Merced; htseng6@ucmerced.edu); Hsin-Ying Lee (University of California, Merced; hlee246@ucmerced.edu); Jia-Bin Huang (Virginia Tech; jbhuang@vt.edu); Ming-Hsuan Yang (University of California, Merced / Google Research / Yonsei University; mhyang@ucmerced.edu).
Pseudocode: Yes. Algorithm 1: Learning-to-Learn Feature-Wise Transformation.
Open Source Code: Yes. "We make the source code and datasets publicly available to stimulate future research in this field." https://github.com/hytseng0509/CrossDomainFewShot
Open Datasets: Yes. "We conduct experiments using five datasets: mini-ImageNet (Ravi & Larochelle, 2017), CUB (Welinder et al., 2010), Cars (Krause et al., 2013), Places (Zhou et al., 2017), and Plantae (Van Horn et al., 2018). We follow the setting in Ravi & Larochelle (2017) and Hilliard et al. (2018) to process the mini-ImageNet and CUB datasets. As for the other datasets, we manually process them by randomly splitting the classes. The numbers of training, validation, and testing categories for each dataset are summarized in Table 3."
Dataset Splits: Yes. "The numbers of training, validation, and testing categories for each dataset are summarized in Table 3."
Hardware Specification: No. The paper mentions using a "ResNet-10 model as the backbone network for our feature encoder E" but does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies: No. The paper mentions using public implementations and integrating specific components, such as "the public implementation from Chen et al. (Chen et al., 2019a)" and "the official implementation for graph convolutional network", but does not provide specific version numbers for any software or libraries.
Experiment Setup: Yes. "We train the metric-based model and feature-wise transformation layers with a learning rate of 0.001 for 40,000 iterations. For the feature-wise transformation layers, we apply L2 regularization with a weight of 10^-8. The number of inner iterations adopted in the learning-to-learn scheme is set to be 1."
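To make the reported mechanism concrete, below is a minimal NumPy sketch of a feature-wise transformation layer as described in the paper: each channel of an intermediate feature map is scaled and shifted by affine terms sampled from Gaussians, where the trainable quantities are the hyperparameters (passed through a softplus) that control the sampling standard deviations. The function name, shapes, and initialization values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softplus(x):
    """Numerically simple softplus, used to keep sampling stds positive."""
    return np.log1p(np.exp(x))

def feature_wise_transform(feats, theta_gamma, theta_beta, rng):
    """Apply a stochastic per-channel affine transform to a feature map.

    feats: (N, C, H, W) intermediate activations.
    theta_gamma, theta_beta: (C,) learned hyperparameters; softplus of
        these gives the std of the sampled scale and bias terms.
    """
    n, c, _, _ = feats.shape
    # Scale terms centered at 1, bias terms centered at 0, per sample/channel.
    gamma = rng.normal(loc=1.0, scale=softplus(theta_gamma), size=(n, c))
    beta = rng.normal(loc=0.0, scale=softplus(theta_beta), size=(n, c))
    return gamma[:, :, None, None] * feats + beta[:, :, None, None]

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 5, 5))  # toy batch of activations
out = feature_wise_transform(feats, np.full(8, -3.0), np.full(8, -3.0), rng)
print(out.shape)  # (4, 8, 5, 5)
```

In the paper this layer is inserted after batch-normalization layers of the feature encoder during training only; at test time the stochastic perturbation is removed.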
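The learning-to-learn scheme of Algorithm 1 can also be illustrated on a toy problem. The sketch below is purely illustrative: scalar quadratic "losses" stand in for the pseudo-seen and pseudo-unseen task losses, finite differences stand in for backpropagation, and the bi-level structure mirrors the reported settings (learning rate 0.001, one inner iteration, L2 weight 10^-8 on the transformation hyperparameters). None of these toy functions come from the paper.

```python
# Toy illustration (hypothetical) of Algorithm 1's learning-to-learn scheme:
# an inner update of the model on a pseudo-seen task, then an update of the
# transformation hyperparameter theta driven by pseudo-unseen task loss.

lr = 1e-3          # reported learning rate
l2_weight = 1e-8   # reported L2 regularization weight on theta

def seen_loss(w, theta):   # toy pseudo-seen task loss
    return (w * theta - 1.0) ** 2

def unseen_loss(w):        # toy pseudo-unseen task loss
    return (w - 2.0) ** 2

def finite_diff(f, x, eps=1e-5):
    """Central-difference gradient, standing in for backprop."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

w, theta = 0.5, 1.0
for step in range(200):
    # Inner step: update the model w on the pseudo-seen task (1 iteration).
    w = w - lr * finite_diff(lambda v: seen_loss(v, theta), w)

    # Outer step: update theta so that the inner-updated model generalizes
    # to the pseudo-unseen task, plus L2 regularization on theta.
    def outer(th):
        w_inner = w - lr * finite_diff(lambda v: seen_loss(v, th), w)
        return unseen_loss(w_inner) + l2_weight * th ** 2

    theta = theta - lr * finite_diff(outer, theta)

print(unseen_loss(w))  # lower than the initial value of 2.25
```

The key design point mirrored here is that theta is never updated to minimize the seen-task loss directly; it is updated through the effect it has on the model after the inner step, which is what encourages the transformation layers to simulate domain shift.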