Learning Conditional Invariances through Non-Commutativity
Authors: Abhra Chaudhuri, Serban Georgescu, Anjan Dutta
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments with NCI on three standard domain adaptation benchmarks, namely, PACS (Li et al., 2017), Office-Home (Venkateswara et al., 2017), and DomainNet (Peng et al., 2019). We evaluate NCI on multi-source domain adaptation (Zhao et al., 2018) with complementary semantics across domains. We report the performance of NCI compared to existing SOTA invariance learning algorithms in Table 1 (PACS and Office-Home) and Table 4 (DomainNet). |
| Researcher Affiliation | Collaboration | Abhra Chaudhuri (University of Exeter, Fujitsu Research of Europe, University of Surrey), Serban Georgescu (Fujitsu Research of Europe), Anjan Dutta (University of Surrey) |
| Pseudocode | No | The paper describes the training procedure in paragraph form in Section 3.4 'TRAINING WITH NCI', but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Implementation is available at https://github.com/abhrac/nci. |
| Open Datasets | Yes | We perform experiments with NCI on three standard domain adaptation benchmarks, namely, PACS (Li et al., 2017), Office-Home (Venkateswara et al., 2017), and DomainNet (Peng et al., 2019). |
| Dataset Splits | No | The paper describes data sharing percentages between source and target domains (e.g., 'Specifically for PACS and Office Home, 70% of the sample supports are shared between both sources and targets... For Domain Net, 75% of the instances are shared across domains...'), but it does not provide explicit training, validation, or test dataset splits (e.g., '80/10/10 split'). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or cloud computing specifications used for running the experiments. It only mentions using settings based on 'Domain Bed (Gulrajani & Lopez-Paz, 2021)'. |
| Software Dependencies | No | The paper states 'All hyperparameters and experimental settings based on the Domain Bed (Gulrajani & Lopez-Paz, 2021) version of DANN', but it does not list specific software dependencies with version numbers, such as programming language versions or library versions (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | No | The paper states that 'all hyperparameters and experimental settings based on the Domain Bed (Gulrajani & Lopez-Paz, 2021) version of DANN' are used, but it does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific training configurations in the main text. |