CrossMatch: Cross-Classifier Consistency Regularization for Open-Set Single Domain Generalization
Authors: Ronghang Zhu, Sheng Li
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on benchmark datasets prove the effectiveness of CrossMatch on enhancing the performance of SDG methods in the OS-SDG setting. |
| Researcher Affiliation | Academia | Ronghang Zhu, Sheng Li University of Georgia {ronghangzhu, sheng.li}@uga.edu |
| Pseudocode | Yes | A.1 ALGORITHM Algorithm 1 illustrates details of our proposed method. |
| Open Source Code | No | No statement or link regarding open-source code release was found in the paper. |
| Open Datasets | Yes | Datasets. (1) Digits comprises five digit datasets: MNIST (LeCun et al., 1989), SVHN (Netzer et al., 2011), USPS (Hull, 1994), MNIST-M and SYN (Ganin & Lempitsky, 2015). (2) Office31 (Saenko et al., 2010)... (3) Office-Home (Venkateswara et al., 2017)... (4) PACS (Li et al., 2017)... |
| Dataset Splits | No | No explicit details about training/validation/test dataset splits (e.g., percentages, sample counts, or clear predefined split references for reproduction) are provided within the paper. The paper mentions "Following the setting defined by (Volpi et al., 2018; Zhao et al., 2020a)" for Digits, but does not explicitly state the splits used in their own experiments. |
| Hardware Specification | No | The paper does not explicitly state the specific hardware (e.g., GPU models, CPU types, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using ConvNet and ResNet18 architectures, Adam and SGD optimizers, and a cosine annealing schedule, but does not provide specific version numbers for any software libraries or frameworks like PyTorch, TensorFlow, or Python. |
| Experiment Setup | Yes | Implementation Details. For Digits... The batch size is 32. We adopt Adam with learning rate β = 0.0001 for the minimization stage and SGD with learning rate η = 1.0 for the maximization stage. We set T = 10000, T_min = 100, T_max = 15, K = 3, γ = 1.0 and α = 1. Office31, Office-Home and PACS... set the batch size to 32. We use SGD with learning rate β = 0.001, adjusted by a cosine annealing schedule (Zagoruyko & Komodakis, 2016), with momentum 0.9 and weight decay 0.00005 for the minimization stage. We adopt SGD with learning rate η = 1.0 for the maximization stage. We set T = 10000, T_min = 100, T_max = 15, K = 1, γ = 1.0 and α = 1. (A hedged configuration sketch follows the table.) |
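The paper does not name its deep learning framework, so the following is a minimal PyTorch sketch of how the reported optimizer and scheduler settings might be instantiated. PyTorch itself, the `torchvision` backbone, the class count, and the `perturbed_inputs` placeholder are assumptions for illustration only, not the authors' released code.

```python
# Hypothetical PyTorch instantiation of the hyperparameters reported in the paper.
import torch
import torchvision

# Backbone: the paper uses a ConvNet for Digits and ResNet18 for Office31 /
# Office-Home / PACS; the class count here is illustrative.
model = torchvision.models.resnet18(num_classes=65)

# Minimization stage (Office31 / Office-Home / PACS settings):
# SGD with lr β = 0.001, momentum 0.9, weight decay 0.00005,
# adjusted by a cosine annealing schedule over T = 10000 iterations.
T = 10_000
min_optimizer = torch.optim.SGD(
    model.parameters(), lr=0.001, momentum=0.9, weight_decay=5e-5
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(min_optimizer, T_max=T)

# Digits setting instead uses Adam with lr β = 0.0001 for the minimization stage:
# min_optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Maximization stage: SGD with lr η = 1.0. In the paper this stage updates the
# adversarially generated samples rather than the model weights; `perturbed_inputs`
# is a placeholder tensor of such samples, shown only to record the reported rate.
# max_optimizer = torch.optim.SGD([perturbed_inputs], lr=1.0)
```

A typical training loop would alternate the two stages (T_min minimization steps, T_max maximization steps), stepping `scheduler` after each minimization update; the alternation pattern itself is described in the paper's Algorithm 1 and is not reproduced here.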