Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Domain Generalization by Learning and Removing Domain-specific Features
Authors: Yu Ding, Lei Wang, Bin Liang, Shuming Liang, Yang Wang, Fang Chen
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our framework achieves superior performance compared with state-of-the-art methods. |
| Researcher Affiliation | Academia | Yu Ding (University of Wollongong), Lei Wang (University of Wollongong), Bin Liang (University of Technology Sydney), Shuming Liang (University of Technology Sydney), Yang Wang (University of Technology Sydney), Fang Chen (University of Technology Sydney) |
| Pseudocode | No | The paper does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/yulearningg/LRDG. |
| Open Datasets | Yes | We evaluate our framework on three object recognition datasets for domain generalization. PACS [27]... VLCS [39]... Office-Home [40]... |
| Dataset Splits | No | The source datasets are split into a training set and a validation set. The learning rate is decided by the validation set. However, the paper does not specify the exact percentages or counts for these splits in the main text. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, memory) used for running its experiments in the main text. |
| Software Dependencies | No | The paper mentions software components like 'AlexNet', 'ResNet18', 'ResNet50', 'U-Net', 'Stochastic Gradient Descent', 'cross-entropy loss', 'entropy loss', and 'pixel-wise l2 loss' but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | We set λ1 = 1 for all the experiments. We give equal weight to the classification loss and the uncertainty loss for training the domain-specific classifiers. For λ2 and λ3, we follow the literature [13, 4] and directly use the leave-one-domain-out cross-validation to select their values. ... AlexNet and ResNet are pre-trained on ImageNet [37] for all the experiments. |
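The leave-one-domain-out cross-validation quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the domain names, the candidate grid, and the `train_and_eval` callback are all hypothetical placeholders for whatever training loop the paper actually uses.

```python
def leave_one_domain_out_select(source_domains, candidates, train_and_eval):
    """Select hyperparameters (e.g. the paper's lambda2, lambda3) by
    leave-one-domain-out cross-validation: for each candidate setting,
    train on all-but-one source domain, validate on the held-out domain,
    and average the validation scores across held-out domains.

    train_and_eval(train_domains, val_domain, params) -> float
    is a hypothetical callback returning a validation score.
    """
    best_params, best_score = None, float("-inf")
    for params in candidates:
        scores = []
        for held_out in source_domains:
            train_domains = [d for d in source_domains if d != held_out]
            scores.append(train_and_eval(train_domains, held_out, params))
        avg_score = sum(scores) / len(scores)
        if avg_score > best_score:
            best_params, best_score = params, avg_score
    return best_params, best_score
```

For a dataset like PACS, `source_domains` would be the three source domains left after removing the target domain, so each candidate is evaluated three times with a different held-out validation domain.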