Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Dynamic Domain Generalization
Authors: Zhishu Sun, Zhifeng Shen, Luojun Lin, Yuanlong Yu, Zhifeng Yang, Shicai Yang, Weijie Chen
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of our proposed method. Extensive experiments are conducted on three popular DG benchmarks, including PACS [Li et al., 2017], Office-Home [Venkateswara et al., 2017] and DomainNet [Peng et al., 2019]. |
| Researcher Affiliation | Collaboration | 1College of Computer and Data Science, Fuzhou University, Fuzhou, China 2Hikvision Research Institute, Hangzhou, China |
| Pseudocode | No | The paper describes the methodology and process flow with text and diagrams (e.g., Figure 2), but does not include a formal pseudocode block or an algorithm listing. |
| Open Source Code | Yes | Code is available: https://github.com/MetaVisionLab/DDG |
| Open Datasets | Yes | Extensive experiments are conducted on three popular DG benchmarks, including PACS [Li et al., 2017], Office-Home [Venkateswara et al., 2017] and DomainNet [Peng et al., 2019]. |
| Dataset Splits | No | The paper mentions using a "validation set" for visualization ("we randomly select an image from validation set to feed into the model"), but does not provide specific details on its size, split percentage, or how it was separated from the training data for reproducibility purposes. |
| Hardware Specification | Yes | All the experiments are conducted on RTX 3090 GPU with PyTorch 1.10.0. |
| Software Dependencies | Yes | All the experiments are conducted on RTX 3090 GPU with PyTorch 1.10.0. |
| Experiment Setup | Yes | For PACS and Office-Home, the network optimization is set with batch size of 64, training epochs of 50, and the initial learning rate of 1e-3 decayed by cosine scheduler. While training on Domain Net, most of the hyper-parameters keep the same with that of PACS, except that the initial learning rate and max epoch are 2e-3 and 15, and the mini-batches are fetched with random domain sampler strategy [Zhou et al., 2021], in order to ensure that each domain is uniformly sampled. |
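The reported schedule (initial learning rate 1e-3 decayed by a cosine scheduler over 50 epochs for PACS/Office-Home; 2e-3 over 15 epochs for DomainNet) can be sketched as a standalone function. This is a minimal sketch assuming the standard half-cosine annealing form (as in PyTorch's `CosineAnnealingLR` with a minimum of 0); the paper does not specify warm-up or a floor learning rate, and the function name `cosine_lr` is ours.

```python
import math

def cosine_lr(epoch: int, max_epochs: int = 50, initial_lr: float = 1e-3) -> float:
    """Half-cosine annealing: initial_lr at epoch 0, decaying to 0 at max_epochs."""
    return initial_lr * 0.5 * (1.0 + math.cos(math.pi * epoch / max_epochs))

# PACS / Office-Home setup as reported: lr 1e-3, 50 epochs
pacs_schedule = [cosine_lr(e) for e in range(51)]

# DomainNet setup as reported: lr 2e-3, 15 epochs
domainnet_schedule = [cosine_lr(e, max_epochs=15, initial_lr=2e-3) for e in range(16)]
```

Under this form the rate passes through half its initial value at the schedule midpoint (5e-4 at epoch 25 for PACS) and reaches 0 at the final epoch.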