MemREIN: Rein the Domain Shift for Cross-Domain Few-Shot Learning
Authors: Yi Xu, Lichen Wang, Yizhou Wang, Can Qin, Yulun Zhang, Yun Fu
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on five popular benchmark datasets demonstrate that MemREIN well addresses the domain shift challenge, and significantly improves the performance by up to 16.43% compared with state-of-the-art baselines. |
| Researcher Affiliation | Academia | Yi Xu¹, Lichen Wang¹, Yizhou Wang¹, Can Qin¹, Yulun Zhang², and Yun Fu¹; ¹Northeastern University, ²ETH Zürich; {xu.yi, wang.lich, wang.yizhou, qin.ca}@northeastern.edu, yulun100@gmail.com, yunfu@ece.neu.edu |
| Pseudocode | No | The paper describes the method using mathematical equations and a framework diagram (Figure 1), but does not include structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide any explicit statements about making its code open-source or include a link to a code repository. |
| Open Datasets | Yes | Five widely used datasets are used: miniImageNet [Ravi and Larochelle, 2017], CUB [Wah et al., 2011], Cars [Krause et al., 2013], Places [Zhou et al., 2017], and Plantae [Van Horn et al., 2018]. |
| Dataset Splits | Yes | We take the same leave-one-out setting which is applied in other baselines. Specifically, we select one dataset among CUB, Cars, Places, and Plantae as the target domain for testing, and use the remaining three datasets along with miniImageNet as the source domains for training. In each trial, we randomly sample Nw categories with Ns randomly selected images for each support set, and 16 images for the query set. (See the episode-sampling sketch after the table.) |
| Hardware Specification | No | The paper mentions using ResNet-10 as the backbone network but does not specify any hardware details (e.g., GPU model, CPU type, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using Adam optimizer and ResNet-10, but it does not specify any software versions for libraries, frameworks, or programming languages. |
| Experiment Setup | Yes | In the training phase, we set λ = 0.1 and train 1000 trials for all the methods. In each trial, we randomly sample Nw categories with Ns randomly selected images for each support set, and 16 images for the query set. We use the Adam optimizer with a learning rate of 0.001. (See the training-loop sketch after the table.) |
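
The leave-one-out split and episodic sampling quoted in the Dataset Splits row can be summarized in code. The sketch below is a minimal illustration under assumed data structures (a `{class: [images]}` dictionary) and illustrative helper names (`leave_one_out_split`, `sample_episode`); it is not the authors' released code.

```python
import random

# Domains used in the paper: miniImageNet plus four target candidates.
ALL_DOMAINS = ["miniImageNet", "CUB", "Cars", "Places", "Plantae"]

def leave_one_out_split(target):
    """Hold one of CUB/Cars/Places/Plantae out as the unseen target domain;
    the remaining three plus miniImageNet form the source domains."""
    assert target != "miniImageNet", "miniImageNet is always a source domain"
    source = [d for d in ALL_DOMAINS if d != target]
    return source, target

def sample_episode(dataset, n_way, n_shot, n_query=16):
    """Sample one N-way K-shot episode from a {class_name: [images]} dict:
    n_shot support images and n_query query images per sampled class."""
    classes = random.sample(sorted(dataset.keys()), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        images = random.sample(dataset[cls], n_shot + n_query)
        support += [(img, label) for img in images[:n_shot]]
        query += [(img, label) for img in images[n_shot:]]
    return support, query

# Example: CUB held out as the target domain, 5-way 1-shot episodes.
source_domains, target_domain = leave_one_out_split("CUB")
```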
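
Similarly, the hyperparameters in the Experiment Setup row (1000 trials, Adam with learning rate 0.001, λ = 0.1) map onto a simple episodic training loop. The PyTorch sketch below is an assumption-laden outline: the model, the episode sampler, and the two loss terms (`classification_loss`, `auxiliary_loss`) are placeholders standing in for MemREIN's actual components, which the paper does not release as code.

```python
import torch

LAMBDA = 0.1          # weight of the auxiliary loss term (lambda = 0.1 in the paper)
NUM_TRIALS = 1000     # number of training episodes ("trials")
LEARNING_RATE = 1e-3  # Adam learning rate reported in the paper

def train(model, sample_episode, classification_loss, auxiliary_loss):
    """Episodic training loop: one Adam step per sampled episode."""
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
    for _ in range(NUM_TRIALS):
        support, query = sample_episode()           # one N-way K-shot episode
        logits = model(support, query)              # episode-level forward pass
        loss = classification_loss(logits) + LAMBDA * auxiliary_loss(model)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```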