Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning

Authors: Fei Zhou, Peng Wang, Lei Zhang, Zhenghua Chen, Wei Wei, Chen Ding, Guosheng Lin, Yanning Zhang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we begin by providing a detailed description of the experimental configuration, encompassing pre-training, meta-training, and meta-testing. Following that, we analyze the advantages of the proposed method in comparison with state-of-the-art methods. Lastly, we delve into a comprehensive ablation study to further investigate the effectiveness of our approach.
Researcher Affiliation | Academia | 1 School of Computer Science, Northwestern Polytechnical University; 2 School of Computer Science and Engineering, University of Electronic Science and Technology of China; 3 Institute for Infocomm Research, and Centre for Frontier AI Research, A*STAR; 4 School of Computer Science, Xi'an University of Posts & Telecommunications; 5 School of Computer Science and Engineering, Nanyang Technological University
Pseudocode | Yes | Algorithm 1: Meta-training algorithm of the proposed method.
Open Source Code | Yes | Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: The data and code will be released.
Open Datasets | Yes | Following the established setup of Guo et al. [2020], Li et al. [2022], Zhou et al. [2023], we employ the base classes of mini-ImageNet Vinyals et al. [2016] as the source domain dataset. Our model is evaluated across multiple target domains, encompassing natural image domains (CUB, Cars, Places, Plantae), a remote sensing domain (EuroSAT), an agricultural domain (CropDisease), and medical domains (ChestX, ISIC).
Dataset Splits | Yes | In each epoch, we randomly sample 100 meta-tasks, where each meta-task consists of 5-way 5-shot 15-query. [...] Specifically, for each target domain, we randomly sample 600 meta-tasks for testing. We consider two challenging meta-tasks: a 5-way 1-shot 15-query task and a 5-way 5-shot 15-query task.
Hardware Specification | Yes | All experiments were performed on a 4090 GPU.
Software Dependencies | No | The paper mentions using "Adam as the optimizer" and "ResNet-10 as the feature embedding network" but does not specify version numbers for these or other software libraries (e.g., PyTorch version, Python version).
Experiment Setup | Yes | During the meta-training phase, we employ Adam as the optimizer and conduct meta-training for 50 epochs with a learning rate of 0.001. In each epoch, we randomly sample 100 meta-tasks, where each meta-task consists of 5-way 5-shot 15-query. Data augmentation techniques such as "Resize," "ImageJitter," and "RandomHorizontalFlip" are applied during meta-training. We set hyper-parameters m1=0.997 and m2=0.999.
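The episode protocol quoted above (N-way K-shot with 15 queries per class) can be sketched as a small sampler. This is an illustrative helper, not the authors' released code; the function name and return format are assumptions.

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=5, n_query=15, rng=None):
    """Sample one N-way K-shot meta-task from per-example class labels.

    labels: sequence where labels[i] is the class of example i.
    Returns (support, query): disjoint lists of example indices with
    n_way * k_shot support and n_way * n_query query examples.
    """
    rng = rng or random.Random()
    # Group example indices by class.
    by_class = defaultdict(list)
    for idx, cls in enumerate(labels):
        by_class[cls].append(idx)
    # Pick N classes, then K+Q disjoint examples within each class.
    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for cls in classes:
        picked = rng.sample(by_class[cls], k_shot + n_query)
        support += picked[:k_shot]
        query += picked[k_shot:]
    return support, query
```

For the paper's 5-way 5-shot 15-query setting this yields 25 support and 75 query examples per meta-task; the 1-shot variant is obtained with `k_shot=1`.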
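The stated meta-training hyper-parameters can be collected into a single configuration sketch. Field names are illustrative (the paper does not publish a config schema), and the role of m1/m2 beyond being hyper-parameters is not specified in the excerpt.

```python
# Meta-training settings as reported in the paper; key names are assumptions.
META_TRAIN_CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "epochs": 50,
    "tasks_per_epoch": 100,
    "n_way": 5,
    "k_shot": 5,
    "n_query": 15,
    "augmentations": ["Resize", "ImageJitter", "RandomHorizontalFlip"],
    "m1": 0.997,  # hyper-parameter from the paper; role not stated here
    "m2": 0.999,  # hyper-parameter from the paper; role not stated here
}

def images_per_task(cfg):
    """Total images drawn per meta-task: N-way * (K support + Q query)."""
    return cfg["n_way"] * (cfg["k_shot"] + cfg["n_query"])
```

Under this configuration each meta-task draws 5 * (5 + 15) = 100 images, so one epoch of 100 meta-tasks touches 10,000 sampled images.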