Domain Generalization with MixStyle
Authors: Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the effectiveness as well as the general applicability of MixStyle, we conduct extensive experiments on a wide spectrum of datasets covering category classification (Sec. 3.1), instance retrieval (Sec. 3.2), and reinforcement learning (Sec. 3.3). |
| Researcher Affiliation | Academia | Kaiyang Zhou¹, Yongxin Yang¹, Yu Qiao², Tao Xiang¹. ¹University of Surrey, UK; ²Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China. k.zhou.vision@gmail.com; {yongxin.yang, t.xiang}@surrey.ac.uk; yu.qiao@siat.ac.cn |
| Pseudocode | Yes | A.1 PSEUDO-CODE OF MIXSTYLE: Algorithm 1 provides a PyTorch-like pseudo-code. (An illustrative PyTorch sketch of the operation is given below this table.) |
| Open Source Code | Yes | Source code can be found at https://github.com/KaiyangZhou/mixstyle-release. |
| Open Datasets | Yes | We choose the PACS dataset (Li et al., 2017), a commonly used domain generalization (DG) benchmark... Two commonly used re-ID datasets are adopted: Market1501 (Zheng et al., 2015) and Duke (Ristani et al., 2016; Zheng et al., 2017)... We conduct experiments on Coinrun (Cobbe et al., 2019)... Digits-DG (Zhou et al., 2020a) and Office-Home (Venkateswara et al., 2017). |
| Dataset Splits | Yes | For evaluation, a model is trained on three domains and tested on the remaining one. ... we report the test accuracy on the held-out validation set of the source domains on PACS in Table 7. (A sketch of this leave-one-domain-out protocol follows the table.) |
| Hardware Specification | No | The paper mentions training models and conducting experiments but does not provide specific details about the hardware used, such as CPU or GPU models, memory, or other specifications. |
| Software Dependencies | No | The paper refers to using "PyTorch-like pseudo-code" and states that their "code is based on Dassl.pytorch" and "Torchreid" frameworks, and "built on top of Igl et al. (2019)". However, it does not specify version numbers for PyTorch or any of the mentioned frameworks/libraries, which are necessary for reproducible software dependencies. |
| Experiment Setup | Yes | Unless specified otherwise, we set α to 0.1 throughout this paper. ... In practice, we use a probability of 0.5 to decide if MixStyle is activated or not in the forward pass. ... we use ResNet-18 (He et al., 2016) as the classifier where MixStyle is inserted after the 1st, 2nd and 3rd residual blocks. (See the setup sketch at the end of this section.) |
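For reference, below is a minimal PyTorch sketch of the MixStyle operation described in the paper: channel-wise mean and standard deviation are computed per instance, mixed with the statistics of a randomly shuffled mini-batch using weights drawn from Beta(α, α), and used to re-style the normalized features. Class and argument names are illustrative; this is not the authors' released implementation.

```python
import random

import torch
import torch.nn as nn


class MixStyle(nn.Module):
    """Illustrative MixStyle layer: mixes per-instance channel-wise feature
    statistics (mean and standard deviation) between randomly paired samples
    in a mini-batch to synthesize novel styles during training."""

    def __init__(self, p=0.5, alpha=0.1, eps=1e-6):
        super().__init__()
        self.p = p          # probability of applying MixStyle in a forward pass
        self.eps = eps      # numerical stability term for the standard deviation
        self.beta = torch.distributions.Beta(alpha, alpha)

    def forward(self, x):
        # Inactive at test time; active with probability p during training.
        if not self.training or random.random() > self.p:
            return x

        b = x.size(0)

        # Per-instance, per-channel statistics over the spatial dimensions.
        mu = x.mean(dim=[2, 3], keepdim=True)
        sig = (x.var(dim=[2, 3], keepdim=True) + self.eps).sqrt()
        mu, sig = mu.detach(), sig.detach()
        x_normed = (x - mu) / sig

        # Instance-wise mixing weights drawn from Beta(alpha, alpha).
        lam = self.beta.sample((b, 1, 1, 1)).to(x.device)

        # Pair each instance with a randomly shuffled one and mix their statistics.
        perm = torch.randperm(b, device=x.device)
        mu_mix = lam * mu + (1 - lam) * mu[perm]
        sig_mix = lam * sig + (1 - lam) * sig[perm]

        # Re-style the normalized features with the mixed statistics.
        return x_normed * sig_mix + mu_mix
```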
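The evaluation protocol quoted under Dataset Splits is leave-one-domain-out: each domain serves once as the held-out target while the others are used for training. A minimal sketch of how the PACS source/target combinations could be enumerated (domain names assumed for illustration):

```python
# Hypothetical enumeration of the PACS leave-one-domain-out splits.
PACS_DOMAINS = ["art_painting", "cartoon", "photo", "sketch"]

for target in PACS_DOMAINS:
    sources = [d for d in PACS_DOMAINS if d != target]
    print(f"train on {sources} -> test on {target}")
```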
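To make the quoted experiment setup concrete, the sketch below wires the MixStyle module from the sketch above into torchvision's ResNet-18 after the first three residual stages, with α = 0.1 and activation probability 0.5 as reported. The wrapper class, its name, and the weight-initialization choice are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ResNet18WithMixStyle(nn.Module):
    """Illustrative wrapper: ResNet-18 backbone with MixStyle (class from the
    sketch above) applied after the 1st, 2nd and 3rd residual stages only."""

    def __init__(self, num_classes, p=0.5, alpha=0.1):
        super().__init__()
        self.backbone = resnet18(weights=None)  # weight initialization left unspecified here
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)
        self.mixstyle = MixStyle(p=p, alpha=alpha)  # MixStyle class from the earlier sketch

    def forward(self, x):
        b = self.backbone
        x = b.maxpool(b.relu(b.bn1(b.conv1(x))))
        x = self.mixstyle(b.layer1(x))  # after the 1st residual stage
        x = self.mixstyle(b.layer2(x))  # after the 2nd residual stage
        x = self.mixstyle(b.layer3(x))  # after the 3rd residual stage
        x = b.layer4(x)                 # no style mixing after the last stage
        x = torch.flatten(b.avgpool(x), 1)
        return b.fc(x)
```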