CODA: Generalizing to Open and Unseen Domains with Compaction and Disambiguation

Authors: Chaoqi Chen, Luyao Tang, Yue Huang, Xiaoguang Han, Yizhou Yu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on four standard DG benchmarks to verify the effectiveness of CODA. (1) PACS [28] has 9,991 images and presents remarkable distinctions in image styles. (2) Office-Home [63] is gathered from both office and home environments... Our results are summarized in Table 2. For each dataset, CODA outperforms all compared methods by a considerable margin in terms of hs. Ablations of key components in CODA. We carry out ablation studies in Table 3, evaluating the effect of source compaction (SC) and target disambiguation (TD) proposed in CODA.
Researcher Affiliation | Academia | Chaoqi Chen (1), Luyao Tang (2), Yue Huang (2), Xiaoguang Han (3), Yizhou Yu (1); (1) The University of Hong Kong, (2) Xiamen University, (3) The Chinese University of Hong Kong (Shenzhen)
Pseudocode | No | The paper describes the method using text and mathematical equations but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We conduct extensive experiments on four standard DG benchmarks to verify the effectiveness of CODA. (1) PACS [28] has 9,991 images... (2) Office-Home [63] is gathered from both office and home environments... (3) Office-31 [48] encompasses 31 classes... (4) Digits, a dataset varying in background, style, and color, encompasses five domains of handwritten digits, including MNIST [26], MNIST-M [17], SVHN [42], USPS [22], and SYN [17]. (An illustrative loading sketch appears after the table.)
Dataset Splits | No | Problem setup. Let us formally define the OTDG problem. We have access to a source domain $\mathcal{D}_s = \{(x_s^i, y_s^i)\}_{i=1}^{n_s}$ of $n_s$ labeled data points and multiple unseen target domains $\mathcal{D}_t = \{x_t^j\}_{j=1}^{n_t}$ of $n_t$ unlabeled data points. Our experiments are built upon Dassl [87] (a PyTorch toolbox developed for DG), covering aspects of data preparation, model training, and model selection. However, specific percentages or counts for the training, validation, and test splits are not explicitly provided. (A hypothetical split sketch appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | Our experiments are built upon Dassl [87] (a PyTorch toolbox developed for DG)... The paper mentions using 'Dassl' and 'PyTorch' but does not specify their version numbers or the versions of any other key software components.
Experiment Setup | Yes | Implementation Details. For PACS, Office-Home, and Office-31, we employ ResNet-18 [19], pre-trained on ImageNet, as the backbone network. For Digits, we employ the LeNet [25] with the architecture arranged as conv-pool-conv-pool-fc-fc-softmax. The training is performed using SGD with a momentum of 0.9 for 100 epochs, and we set the batch size to 64. (A configuration sketch appears after the table.)
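
For the Open Datasets row, here is a minimal loading sketch for the Digits domains that ship with torchvision. The 32x32 three-channel preprocessing and the "data" root are assumptions, not stated in the paper; MNIST-M and SYN have no built-in torchvision loader and are omitted.

```python
from torchvision import datasets, transforms

# Digits domains with built-in torchvision loaders. The 32x32, 3-channel
# preprocessing and the "data" root are assumptions, not from the paper;
# MNIST-M and SYN require custom loaders and are omitted here.
tfm = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.Grayscale(num_output_channels=3),  # unify channel counts across domains
    transforms.ToTensor(),
])
mnist = datasets.MNIST("data", train=True, download=True, transform=tfm)
svhn = datasets.SVHN("data", split="train", download=True, transform=tfm)
usps = datasets.USPS("data", train=True, download=True, transform=tfm)
```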
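
Since the Dataset Splits row notes that split percentages are not reported, the following shows one hypothetical way to carve a validation set out of a labeled source domain. The 90/10 ratio, the ImageFolder layout, and the path are all illustrative assumptions.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Hypothetical source-domain split; the paper reports no split percentages,
# so the 90/10 ratio and the "data/pacs/photo" path are illustrative only.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
source = datasets.ImageFolder("data/pacs/photo", transform=tfm)

n_val = int(0.1 * len(source))
train_set, val_set = random_split(
    source,
    [len(source) - n_val, n_val],
    generator=torch.Generator().manual_seed(0),  # fixed seed so the split is reproducible
)
```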
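
The Experiment Setup row pins down the backbones, optimizer momentum, epoch count, and batch size; below is a minimal PyTorch sketch of that configuration. The LeNet channel widths, kernel sizes, 32x32 input resolution, and the learning rate are assumptions, since the quoted details give only the layer ordering and the momentum value.

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone for PACS / Office-Home / Office-31: ImageNet-pretrained ResNet-18,
# as stated in the quoted implementation details.
resnet18 = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# LeNet-style backbone for Digits, arranged conv-pool-conv-pool-fc-fc-softmax.
# Channel widths, kernel sizes, and the 32x32 input resolution are assumptions;
# the paper specifies only the layer ordering.
class LeNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # conv-pool
            nn.Conv2d(64, 128, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2), # conv-pool
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 5 * 5, 1024), nn.ReLU(),  # fc
            nn.Linear(1024, num_classes),             # fc (softmax is applied inside the loss)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet()
# Optimizer and schedule as quoted: SGD with momentum 0.9, 100 epochs, batch size 64.
# The learning rate is an assumption; the paper excerpt does not state it.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
EPOCHS, BATCH_SIZE = 100, 64
```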