Test-time Fourier Style Calibration for Domain Generalization

Authors: Xingchen Zhao, Chang Liu, Anthony Sicilia, Seong Jae Hwang, Yun Fu

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on several popular DG benchmarks and a segmentation dataset for medical images demonstrate that our method outperforms state-of-the-art methods." The proposed work is evaluated on multiple DG benchmarks with different backbones, covering both image classification and medical image segmentation, and the authors report that their method significantly improves the generalizability of CNNs and outperforms multiple state-of-the-art methods.
Researcher Affiliation | Academia | Xingchen Zhao (1), Chang Liu (1), Anthony Sicilia (2), Seong Jae Hwang (2), and Yun Fu (1); (1) Northeastern University, (2) University of Pittsburgh; {zhao.xingc, liu.chang6}@northeastern.edu, {anthonysicilia, sjh95}@pitt.edu, yunfu@ece.neu.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include any explicit statement about providing open-source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | "Our method is evaluated on three DG benchmarks: (1) PACS [Li et al., 2017]... (2) Office-Home [Venkateswara et al., 2017]... (3) MICCAI WMH Challenge [Kuijf et al., 2019]..."
Dataset Splits | Yes | "We follow the train-val split provided by [Li et al., 2017]."
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions).
Experiment Setup | Yes | "For PACS and Office-Home, ...trained with the learning rate of 1e-3 for feature extractors, the learning rate of 1e-4 for classifiers, batch size of 64, and epochs of 50. ... For WMH Challenge, ...with the learning rate of 2e-4, batch size of 30, and epochs of 300. ... optimized by SGD with weight decay of 5e-4, and all learning rates are decayed by 0.1 after 80% of the epochs." For all experiments: for TF-Cal, the calibration strength η and τ are set to 0.5, and p_cal is set to 0.5; for AAF, activated with a probability of 0.5, δ ∼ Beta(0.2, 0.2).
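
To make the setup row concrete, below is a minimal PyTorch sketch of the quoted PACS/Office-Home training configuration together with a Fourier-amplitude style augmentation in the spirit of AAF. This is an illustration under stated assumptions, not the authors' code: the placeholder backbone and classifier modules, the `fourier_amplitude_mix` helper name, the within-batch choice of mixing partner, and the SGD momentum value are all assumptions not given in the excerpt above.

```python
# Sketch of the quoted training configuration and an AAF-style
# Fourier-amplitude augmentation. Names marked as assumptions below
# are illustrative and do not come from the paper.
import torch
import torch.nn as nn
from torch.distributions import Beta

# --- Optimizer / schedule (PACS and Office-Home settings quoted above) ---
feature_extractor = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1))  # placeholder backbone
classifier = nn.Linear(64, 7)  # placeholder head (7 = number of PACS classes)

epochs = 50
optimizer = torch.optim.SGD(
    [
        {"params": feature_extractor.parameters(), "lr": 1e-3},  # feature extractor LR
        {"params": classifier.parameters(), "lr": 1e-4},         # classifier LR
    ],
    momentum=0.9,       # assumption: momentum is not stated in the excerpt
    weight_decay=5e-4,
)
# "All learning rates are decayed by 0.1 after 80% of the epochs."
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[int(0.8 * epochs)], gamma=0.1
)

# --- AAF-style Fourier amplitude augmentation, applied with probability 0.5 ---
def fourier_amplitude_mix(x, p=0.5, alpha=0.2):
    """Mix the Fourier amplitude spectra of images within a batch.

    Sketch only: treats the amplitude spectrum as "style" and interpolates
    it between each image and a randomly chosen batch partner with a
    coefficient delta ~ Beta(alpha, alpha), keeping the phase intact.
    """
    if torch.rand(1).item() > p:          # activate with probability p
        return x
    fft = torch.fft.fft2(x, dim=(-2, -1))
    amp, pha = torch.abs(fft), torch.angle(fft)
    perm = torch.randperm(x.size(0))      # assumption: partner drawn from the batch
    delta = Beta(alpha, alpha).sample().to(x.device)
    amp_mixed = (1 - delta) * amp + delta * amp[perm]
    mixed = amp_mixed * torch.exp(1j * pha)  # recombine mixed amplitude with phase
    return torch.fft.ifft2(mixed, dim=(-2, -1)).real

# Example: x = torch.randn(64, 3, 224, 224); x_aug = fourier_amplitude_mix(x)
```

Note that, as the title suggests, TF-Cal itself operates at test time, calibrating Fourier amplitude (style) toward the source domains with the strength parameters quoted above; the sketch covers only the training-side settings listed in the table.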