Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot Classification

Authors: Hao Zheng, Runqi Wang, Jianzhuang Liu, Asako Kanezaki

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted to verify that our proposed CLD and FD can achieve state-of-the-art results on the BSCD-FSL benchmark with large domain gaps.
Researcher Affiliation | Collaboration | Hao Zheng (Tokyo Institute of Technology, zheng.h.ad@m.titech.ac.jp); Runqi Wang (Huawei Noah's Ark Lab, runqiwangstu@hotmail.com); Jianzhuang Liu (Huawei Noah's Ark Lab, liu.jianzhuang@huawei.com); Asako Kanezaki (Tokyo Institute of Technology, kanezaki@c.titech.ac.jp)
Pseudocode | No | The paper describes its methods using text and figures, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The implementation code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/CLDFD.
Open Datasets | Yes | The miniImageNet dataset (Vinyals et al., 2016) serves as the base dataset D_B, which has sufficient labeled images. EuroSAT (Helber et al., 2019), CropDisease (Mohanty et al., 2016), ISIC (Codella et al., 2019) and ChestX (Wang et al., 2017) in BSCD-FSL are the unlabeled target datasets D_T.
Dataset Splits | No | The paper mentions that 'The remaining 80% of target images are utilized for fine-tuning and evaluating by building 5-way K-shot tasks,' which implies support sets for fine-tuning a classifier (an episode-sampling sketch of this protocol follows the table). However, it does not explicitly define a distinct 'validation' split for hyperparameter tuning during the main model training, only stating that hyperparameters were determined on the EuroSAT dataset as a whole.
Hardware Specification | No | The paper states, 'We gratefully acknowledge the support of MindSpore (MindSpore), CANN (Compute Architecture for Neural Networks) and Ascend AI Processor used for this research.' However, it does not specify concrete hardware details such as specific GPU/CPU models, processor types, or memory amounts.
Software Dependencies | No | The paper mentions 'We implement our model using the MindSpore Lite tool (MindSpore)' but does not provide specific version numbers for MindSpore Lite or any other software dependencies.
Experiment Setup | Yes | Our model is optimized by SGD with momentum 0.9, weight decay 1e-4, and batch size 32 for 600 epochs. The learning rate is 0.1 at the beginning and decays by a factor of 0.1 after the 300th and 500th epochs. The hyper-parameters λ in Equation 3, and h and β in Equation 6, are set to 2, 64, and 0.4, respectively.
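
The reported optimization settings can be summarized in a short sketch. This is an illustration only: the paper's implementation is in MindSpore Lite, whereas the snippet below uses PyTorch, and the backbone in it is a placeholder rather than the authors' network.

```python
# Sketch of the reported training schedule (PyTorch for illustration only;
# the paper's code is MindSpore-based and the backbone here is a placeholder).
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import MultiStepLR

model = torch.nn.Sequential(            # placeholder backbone, not the paper's model
    torch.nn.Conv2d(3, 64, 3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(64, 64),
)

# Reported settings: SGD, momentum 0.9, weight decay 1e-4, batch size 32,
# 600 epochs, lr 0.1 decayed by 0.1 after the 300th and 500th epochs.
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = MultiStepLR(optimizer, milestones=[300, 500], gamma=0.1)

# Reported loss hyper-parameters (Equations 3 and 6 in the paper).
LAMBDA = 2     # λ in Equation 3
H = 64         # h in Equation 6
BETA = 0.4     # β in Equation 6

for epoch in range(600):
    # train_one_epoch(model, optimizer, ...)  # hypothetical training step
    scheduler.step()
```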
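
For the evaluation protocol quoted in the Dataset Splits row, the sketch below shows one way to build a 5-way K-shot episode from the 80% evaluation pool of a target dataset. The function name, the 15-query convention, and the pool format are assumptions for illustration; they are not taken from the paper's code.

```python
# Hedged sketch: sampling a 5-way K-shot episode from a target-domain pool
# (the 80% split mentioned above). Names and the n_query=15 default are
# assumptions, not the authors' implementation.
import random
from collections import defaultdict

def sample_episode(pool, n_way=5, k_shot=5, n_query=15, seed=None):
    """pool: list of (image, label) pairs from the target dataset's evaluation split."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for image, label in pool:
        by_class[label].append(image)
    # Keep only classes with enough images, then pick n_way of them.
    eligible = [c for c, imgs in by_class.items() if len(imgs) >= k_shot + n_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        images = rng.sample(by_class[cls], k_shot + n_query)
        support += [(img, episode_label) for img in images[:k_shot]]
        query += [(img, episode_label) for img in images[k_shot:]]
    # Fine-tune a classifier on `support`, then evaluate it on `query`.
    return support, query
```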