Dynamic Domain Generalization
Authors: Zhishu Sun, Zhifeng Shen, Luojun Lin, Yuanlong Yu, Zhifeng Yang, Shicai Yang, Weijie Chen
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on three popular DG benchmarks, including PACS [Li et al., 2017], Office-Home [Venkateswara et al., 2017] and DomainNet [Peng et al., 2019]. |
| Researcher Affiliation | Collaboration | 1College of Computer and Data Science, Fuzhou University, Fuzhou, China 2Hikvision Research Institute, Hangzhou, China |
| Pseudocode | No | The paper describes the methodology and process flow with text and diagrams (e.g., Figure 2), but does not include a formal pseudocode block or an algorithm listing. |
| Open Source Code | Yes | Code is available: https://github.com/MetaVisionLab/DDG |
| Open Datasets | Yes | Extensive experiments are conducted on three popular DG benchmarks, including PACS [Li et al., 2017], Office-Home [Venkateswara et al., 2017] and DomainNet [Peng et al., 2019]. |
| Dataset Splits | No | The paper mentions using a "validation set" for visualization ("we randomly select an image from validation set to feed into the model"), but does not provide specific details on its size, split percentage, or how it was separated from the training data for reproducibility purposes. |
| Hardware Specification | Yes | All the experiments are conducted on RTX 3090 GPU with PyTorch 1.10.0. |
| Software Dependencies | Yes | All the experiments are conducted on RTX 3090 GPU with PyTorch 1.10.0. |
| Experiment Setup | Yes | For PACS and Office-Home, the network optimization is set with a batch size of 64, 50 training epochs, and an initial learning rate of 1e-3 decayed by a cosine scheduler. When training on DomainNet, most hyper-parameters are kept the same as for PACS, except that the initial learning rate and maximum epoch count are 2e-3 and 15, and mini-batches are fetched with the random domain sampler strategy [Zhou et al., 2021] to ensure that each domain is sampled uniformly. |
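The learning-rate schedule from the Experiment Setup row can be sketched in plain Python. The formula below is the standard cosine-annealing rule decaying to zero — an assumption, since the paper only says "decayed by cosine scheduler" — applied to the reported base rates and epoch counts:

```python
import math

def cosine_lr(epoch: int, max_epochs: int, base_lr: float) -> float:
    """Standard cosine annealing: decays from base_lr at epoch 0 to 0 at max_epochs."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / max_epochs))

# Reported settings: PACS / Office-Home use base_lr=1e-3 over 50 epochs;
# DomainNet uses base_lr=2e-3 over 15 epochs.
pacs_schedule = [cosine_lr(e, 50, 1e-3) for e in range(50)]
domainnet_schedule = [cosine_lr(e, 15, 2e-3) for e in range(15)]
```

In PyTorch this would correspond to `torch.optim.lr_scheduler.CosineAnnealingLR` with `T_max` set to the epoch count, though the released code should be consulted for the exact scheduler configuration.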