Cross-Domain Few-Shot Classification via Adversarial Task Augmentation

Authors: Haoqing Wang, Zhi-Hong Deng

IJCAI 2021

Reproducibility variables, each with its result and the supporting LLM response:
Research Type: Experimental. "We conduct extensive experiments under the cross-domain setting, using nine few-shot classification datasets: miniImageNet, CUB, Cars, Places, Plantae, CropDiseases, EuroSAT, ISIC and ChestX. Experimental results show that our method can effectively improve the few-shot classification performance of the meta-learning models under domain shift, and outperforms the existing works."
Researcher Affiliation: Academia. "School of Electronics Engineering and Computer Science, Peking University, Beijing, China. wanghaoqing@pku.edu.cn, zhdeng@pku.edu.cn"
Pseudocode: Yes. "Algorithm 1: Adversarial Task Augmentation" (a sketch of this loop follows the table).
Open Source Code: Yes. "Our code is available at https://github.com/Haoqing-Wang/CDFSL-ATA."
Open Datasets: Yes. "We conduct extensive experiments under the cross-domain setting, using nine few-shot classification datasets: miniImageNet, CUB, Cars, Places, Plantae, CropDiseases, EuroSAT, ISIC and ChestX, which are introduced by [Tseng et al., 2020] and [Guo et al., 2020]. Each dataset consists of train/val/test splits; please refer to these references for more details."
Dataset Splits: Yes. "Each dataset consists of train/val/test splits; please refer to these references for more details. We use the miniImageNet domain as the single source domain, and evaluate the trained model on the other eight domains. We select the model parameters with the best accuracy on the validation set of miniImageNet for model evaluation." (See the protocol sketch after the table.)
Hardware Specification: No. The paper states "In all experiments, we use the ResNet-10... and use the Adam optimizer", but does not specify any hardware details such as GPU/CPU models, memory, or the computing platform.
Software Dependencies: No. The paper mentions using "ResNet-10 as the feature extractor" and the "Adam optimizer", but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup: Yes. "In all experiments, we use the ResNet-10 [He et al., 2016] as the feature extractor and use the Adam optimizer with the learning rate α = 0.001. We find that setting Tmax = 5 or 10 is sufficient to obtain satisfactory results, and we choose the learning rate of the gradient ascent process β from {20, 40, 60, 80}. We set K = {1, 3, 5, 7, 11, 15} for all experiments and choose p from {0.5, 0.6, 0.7}. We evaluate the model in the 5-way 1-shot/5-shot settings using 2,000 randomly sampled episodes with 16 query samples per class, and report the average accuracy (%) as well as the 95% confidence interval." (The interval computation is sketched below.)
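
The augmentation loop named in Algorithm 1 is, at its core, gradient ascent on the task itself: the model is held fixed while the episode is perturbed to increase its loss. Below is a minimal PyTorch sketch of that inner loop, assuming a hypothetical model.meta_loss(images, labels) interface (not an identifier from the released code) and the Tmax and β values quoted above; the paper's full procedure also regularizes how far the perturbed task may drift from the original, which is elided here.

    import torch

    def adversarial_task_augmentation(model, images, labels, T_max=5, beta=40.0):
        # Perturb an episode's images by gradient ascent on the meta-loss,
        # producing a harder ("worst-case") task for the fixed model.
        # `model.meta_loss` is an assumed interface returning the few-shot
        # classification loss of the model on the given episode.
        adv = images.clone().detach().requires_grad_(True)
        for _ in range(T_max):
            loss = model.meta_loss(adv, labels)
            grad, = torch.autograd.grad(loss, adv)
            with torch.no_grad():
                adv += beta * grad  # ascent step: raise the task's loss
        return adv.detach()

The meta-learner then takes its usual update step on the perturbed episode, so training repeatedly sees tasks that are deliberately shifted away from the source distribution.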
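For reference, the single-source protocol from the Dataset Splits row fits in a few lines of configuration; the constant names below are illustrative, not taken from the repository.

    # Hypothetical constants mirroring the quoted protocol: train on the single
    # source domain, model-select on its validation split, test on the rest.
    SOURCE_DOMAIN = "miniImageNet"
    TARGET_DOMAINS = [
        "CUB", "Cars", "Places", "Plantae",           # [Tseng et al., 2020]
        "CropDiseases", "EuroSAT", "ISIC", "ChestX",  # [Guo et al., 2020]
    ]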
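The reported statistic in the Experiment Setup row (average accuracy with a 95% confidence interval over 2,000 episodes) is presumably the usual normal-approximation interval over per-episode accuracies; the paper does not spell out the formula, so the sketch below is an assumption.

    import numpy as np

    def summarize_episodes(per_episode_acc):
        # Mean accuracy (%) and 95% confidence interval over sampled episodes,
        # via the normal approximation: 1.96 * std / sqrt(n).
        acc = np.asarray(per_episode_acc, dtype=float) * 100.0
        mean = acc.mean()
        ci95 = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))
        return mean, ci95

    # e.g., for 2,000 episodes of 5-way 1-shot with 16 queries per class:
    # mean, ci = summarize_episodes(accs); print(f"{mean:.2f} +/- {ci:.2f}")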