Domain Generalization by Learning and Removing Domain-specific Features

Authors: Yu Ding, Lei Wang, Bin Liang, Shuming Liang, Yang Wang, Fang Chen

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that our framework achieves superior performance compared with state-of-the-art methods."
Researcher Affiliation | Academia | Yu Ding (University of Wollongong, yd624@uowmail.edu.au); Lei Wang (University of Wollongong, leiw@uow.edu.au); Bin Liang (University of Technology Sydney, Bin.Liang@uts.edu.au); Shuming Liang (University of Technology Sydney, Shuming.Liang@uts.edu.au); Yang Wang (University of Technology Sydney, Yang.Wang@uts.edu.au); Fang Chen (University of Technology Sydney, Fang.Chen@uts.edu.au)
Pseudocode | No | The paper does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/yulearningg/LRDG.
Open Datasets | Yes | "We evaluate our framework on three object recognition datasets for domain generalization. PACS [27]... VLCS [39]... Office-Home [40]..."
Dataset Splits | No | The source datasets are split into a training set and a validation set, and the learning rate is selected on the validation set, but the paper does not specify exact split percentages or counts in the main text.
Hardware Specification | No | The paper does not describe the hardware (e.g., GPU models, CPU type, memory) used to run its experiments in the main text.
Software Dependencies | No | The paper mentions software components such as AlexNet, ResNet18, ResNet50, U-Net, stochastic gradient descent (SGD), cross-entropy loss, entropy loss, and pixel-wise l2 loss, but does not provide version numbers for any of them. (A sketch of how such loss terms could be written appears after the table.)
Experiment Setup | Yes | "We set λ1 = 1 for all the experiments. We give equal weight to the classification loss and the uncertainty loss for training the domain-specific classifiers. For λ2 and λ3, we follow the literature [13, 4] and directly use the leave-one-domain-out cross-validation to select their values. ... AlexNet and ResNet are pre-trained by ImageNet [37] for all the experiments." (Illustrative sketches of this configuration follow the table.)
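The Software Dependencies and Experiment Setup rows name the individual loss terms (cross-entropy, entropy, pixel-wise l2) and state that the classification and uncertainty losses are weighted equally when training the domain-specific classifiers. The snippet below is a minimal PyTorch sketch of those terms only, under the assumption that the uncertainty loss is the entropy of the softmax output; it is not the paper's full objective, and how λ1, λ2, and λ3 weight the remaining terms is defined in the paper rather than here.

```python
import torch
import torch.nn.functional as F

def classification_and_uncertainty_loss(logits: torch.Tensor,
                                        labels: torch.Tensor) -> torch.Tensor:
    """Equal-weighted sum of a cross-entropy (classification) term and an
    entropy (uncertainty) term, mirroring the quoted description of how the
    domain-specific classifiers are trained. Illustrative only."""
    ce = F.cross_entropy(logits, labels)                             # classification loss
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()   # uncertainty (entropy) loss
    return ce + entropy                                              # equal weight, per the quoted setup

def pixel_l2_loss(reconstruction: torch.Tensor,
                  target_image: torch.Tensor) -> torch.Tensor:
    """Pixel-wise l2 loss named among the software components (MSE in PyTorch)."""
    return F.mse_loss(reconstruction, target_image)
```

In the quoted setup, per-term functions like these would enter a weighted objective with λ1 = 1 and λ2, λ3 chosen by leave-one-domain-out cross-validation, as sketched next.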
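The sketch below illustrates leave-one-domain-out selection of (λ2, λ3) over the source domains. Only the hold-one-source-domain-out procedure and the four standard PACS domain names come from the source; the candidate grid and the `train_and_evaluate` helper are hypothetical placeholders introduced for illustration.

```python
from itertools import product

PACS_DOMAINS = ["art_painting", "cartoon", "photo", "sketch"]
LAMBDA_GRID = [0.01, 0.1, 1.0, 10.0]  # assumed candidate values, not from the paper

def select_lambdas(source_domains, train_and_evaluate):
    """Pick (lambda2, lambda3) by averaging validation accuracy over runs in
    which each source domain is held out in turn as a pseudo-test domain."""
    best_score, best_cfg = float("-inf"), None
    for lam2, lam3 in product(LAMBDA_GRID, LAMBDA_GRID):
        scores = []
        for held_out in source_domains:
            train_doms = [d for d in source_domains if d != held_out]
            scores.append(train_and_evaluate(train_doms, held_out, lam2, lam3))
        avg = sum(scores) / len(scores)
        if avg > best_score:
            best_score, best_cfg = avg, (lam2, lam3)
    return best_cfg

# Example: with "sketch" held out as the unseen target domain, the three
# remaining PACS domains act as sources for hyperparameter selection:
# lam2, lam3 = select_lambdas([d for d in PACS_DOMAINS if d != "sketch"],
#                             train_and_evaluate)
```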