Few-Shot Adaptation of Pre-Trained Networks for Domain Shift

Authors: Wenyu Zhang, Li Shen, Wanyue Zhang, Chuan-Sheng Foo

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on 5 cross-domain classification and 4 semantic segmentation datasets show that our method achieves more accurate and reliable performance than test-time adaptation, while not being constrained by streaming conditions.
Researcher Affiliation | Academia | Wenyu Zhang1, Li Shen1, Wanyue Zhang2 and Chuan-Sheng Foo1,3 — 1Institute for Infocomm Research, A*STAR; 2Max Planck Institute for Informatics; 3Centre for Frontier AI Research, A*STAR
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the methodology is openly available.
Open Datasets | Yes | We evaluate on image classification and segmentation tasks using publicly available benchmark datasets and compare our proposed method to existing source-free methods. For each dataset, our source models are the base models trained on source domain(s) using empirical risk minimization (ERM) or DG methods that have state-of-the-art performance on that dataset.
Dataset Splits | No | The paper mentions training, but does not explicitly provide details about specific training/validation/test splits, such as percentages, sample counts, or a cross-validation setup, in the main text.
Hardware Specification | Yes | Empirically, on VisDA using a Tesla V100-SXM2 GPU, the average training time per epoch on a vanilla ResNet-101 is 3.18s, 3.56s and 4.32s for k = 1, 5 and 10 respectively.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | Yes | We set m = 10 epochs in all our experiments. Support samples are augmented with the same data augmentations for source model training. We use a mini-batch size of 32 for classification and 1 for segmentation, and use the Adam optimizer with 0.001 learning rate for finetuning LCCS.
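Since the paper releases no code, the reported finetuning configuration (Adam optimizer, learning rate 0.001, m = 10 epochs, mini-batch size 32 for classification) can only be sketched. Below is a minimal pure-Python illustration of that setup: the Adam update rule with standard defaults, applied to a hypothetical scalar objective standing in for the LCCS finetuning loss. The objective and parameter are placeholders, not the paper's method.

```python
# Hedged sketch of the reported finetuning setup: Adam optimizer,
# learning rate 0.001, m = 10 epochs, mini-batch size 32 (classification;
# the paper uses batch size 1 for segmentation). The toy loss below is
# a placeholder, since the actual LCCS objective is not released.

EPOCHS = 10          # m = 10 epochs, as stated in the paper
BATCH_SIZE = 32      # classification setting
LR = 1e-3            # Adam learning rate from the paper

def adam_step(theta, grad, state, lr=LR, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on a scalar parameter (standard Adam defaults)."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad * grad
    m_hat = state["m"] / (1 - b1 ** state["t"])       # bias-corrected 1st moment
    v_hat = state["v"] / (1 - b2 ** state["t"])       # bias-corrected 2nd moment
    return theta - lr * m_hat / (v_hat ** 0.5 + eps)

# Toy objective: minimize (theta - 2)^2, so grad = 2 * (theta - 2).
theta = 0.0
state = {"t": 0, "m": 0.0, "v": 0.0}
for epoch in range(EPOCHS):
    for _ in range(BATCH_SIZE):          # one update per "mini-batch"
        grad = 2.0 * (theta - 2.0)
        theta = adam_step(theta, grad, state)
print(round(theta, 3))
```

With these settings the parameter moves steadily toward the minimum at 2.0; because Adam's effective step is roughly the learning rate when the gradient sign is constant, 320 updates at lr = 0.001 advance the parameter by only about 0.3, which is the expected behavior of this schedule, not a bug.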