Domain Aggregation Networks for Multi-Source Domain Adaptation
Authors: Junfeng Wen, Russell Greiner, Dale Schuurmans
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We find that DARN can significantly outperform the state-of-the-art alternatives on multiple real-world tasks, including digit/object recognition and sentiment analysis. In this section, we demonstrate some of the key properties of DARN on a synthetic regression problem, then compare DARN to several state-of-the-art methods on multiple challenging real-world tasks. |
| Researcher Affiliation | Collaboration | ¹Amii, Department of Computing Science, University of Alberta, Edmonton, Canada; ²Google Brain. |
| Pseudocode | Yes | Algorithm 1 Domain Aggregation Network |
| Open Source Code | Yes | The code is available at github.com/junfengwen/DARN. |
| Open Datasets | Yes | Following previous works (Ganin et al., 2016; Zhao et al., 2018), we use the four digit recognition datasets in this experiment (MNIST, MNIST-M, SVHN and Synth). We use the challenging Office Home dataset (Venkateswara et al., 2017). We use the Amazon review dataset (Blitzer et al., 2007; Chen et al., 2012). |
| Dataset Splits | Yes | One of the four datasets is chosen as the unlabelled target domain in turn and the other three are used as labelled source domains. A minimal sketch of this rotation appears after the table. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory amounts) used to run its experiments. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) needed to replicate the experiments. |
| Experiment Setup | No | Algorithm 1 lists general experimental parameters, namely a mini-batch size B, a positive optimizer learning rate, and a positive temperature parameter, but the main text does not give their concrete values or other detailed training configurations. It defers some details to Appendix C, which is not available. A hedged sketch of how such a temperature parameter could enter a domain-weighting rule appears after the table. |
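
To make the evaluation protocol in the Dataset Splits row concrete, here is a minimal sketch of the leave-one-domain-out rotation over the four digit datasets. It illustrates the described protocol only and is not the authors' code (see github.com/junfengwen/DARN for that); the helper name `leave_one_out_splits` is our own.

```python
# Sketch of the leave-one-domain-out protocol: each digit dataset serves as
# the unlabelled target once while the remaining three act as labelled sources.

DIGIT_DOMAINS = ["MNIST", "MNIST-M", "SVHN", "Synth"]

def leave_one_out_splits(domains):
    """Yield (sources, target) pairs, rotating the target domain."""
    for target in domains:
        sources = [d for d in domains if d != target]
        yield sources, target

for sources, target in leave_one_out_splits(DIGIT_DOMAINS):
    print(f"sources: {sources} -> target: {target}")
```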
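
The Experiment Setup row notes that Algorithm 1 takes a temperature parameter without reporting its value. As a hedged illustration of why such a parameter matters, the sketch below uses a temperature-scaled softmax to turn per-source relevance scores into aggregation weights. This is a generic stand-in, not DARN's derived weighting rule; the function `domain_weights` and the example scores are invented for illustration.

```python
import numpy as np

def domain_weights(source_scores, temperature=1.0):
    """Map per-source scores (lower = more relevant to the target) to a
    distribution over sources. A smaller temperature concentrates weight on
    the most relevant sources; a larger one spreads it more evenly."""
    logits = -np.asarray(source_scores, dtype=float) / temperature
    logits -= logits.max()              # shift for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Three labelled sources: the one scored as closest to the target dominates
# at low temperature, and the weighting flattens as the temperature grows.
print(domain_weights([0.9, 0.3, 0.5], temperature=0.5))
print(domain_weights([0.9, 0.3, 0.5], temperature=5.0))
```

Here a lower score stands in for a source estimated to match the target better; the actual quantity DARN aggregates over is derived in the paper.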