Mapping conditional distributions for domain adaptation under generalized target shift
Authors: Matthieu Kirchmeyer, Alain Rakotomamonjy, Emmanuel de Bézenac, Patrick Gallinari
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through an exhaustive comparison on several datasets, we challenge the state-of-the-art in GeTarS. |
| Researcher Affiliation | Collaboration | (1) CNRS-ISIR, Sorbonne University; (2) Criteo AI Lab; (3) Université de Rouen, LITIS |
| Pseudocode | Yes | Our pseudo-code and runtime / complexity analysis are presented in Appendix F. |
| Open Source Code | Yes | Our source code is available at https://github.com/mkirchmeyer/ostar. |
| Open Datasets | Yes | The datasets used in our experiments are public benchmarks and we provide a complete description of the data processing steps and NN architectures in Appendix G. |
| Dataset Splits | Yes | We subsample our datasets to make label proportions dissimilar across domains, as detailed in Appendix Table 8. For Digits, we subsample the target domain and investigate three settings (balanced, mild and high) as in Rakotomamonjy et al. (2021). For other datasets, we modify the source domain by considering 30% of the samples coming from the first half of their classes, as in Combes et al. (2020). (A sketch of this subsampling protocol follows the table.) |
| Hardware Specification | Yes | In practice, on USPS→MNIST the runtimes in seconds on a NVIDIA Tesla V100 GPU machine are the following: DANN: 22.75s, WDβ=0: 59.25s... |
| Software Dependencies | No | The paper mentions software like 'cvxopt', 'POT', and 'sklearn' but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | Batch size is Nb = 200 and all models are trained using Adam with learning rate tuned in the range [10⁻⁴, 10⁻³]. We initialize NNs for classifiers and feature extractors with a normal prior with zero mean and gain 0.02, and φ with orthogonal initialization with gain 0.02. (A sketch of this setup follows the table.) |
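
The source-domain subsampling described in the Dataset Splits row can be sketched as follows. This is a minimal illustration under assumptions: the NumPy arrays `X`/`y`, the function name `subsample_source`, and the fixed seed are hypothetical, not taken from the paper's released code.

```python
import numpy as np

def subsample_source(X, y, num_classes, keep_frac=0.3, seed=0):
    """Induce label shift: keep only `keep_frac` of the samples from the
    first half of the classes, leaving the remaining classes untouched
    (the protocol attributed to Combes et al. (2020) above)."""
    rng = np.random.default_rng(seed)
    kept = []
    for c in range(num_classes):
        idx = np.where(y == c)[0]
        if c < num_classes // 2:  # first half of classes: subsample to 30%
            idx = rng.choice(idx, size=int(keep_frac * len(idx)), replace=False)
        kept.append(idx)
    kept = np.concatenate(kept)
    return X[kept], y[kept]
```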
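
Similarly, the Experiment Setup row translates to roughly the following PyTorch sketch. The layer sizes and architectures are placeholders (the actual ones are described in the paper's Appendix G), and reading "gain 0.02" as the standard deviation of the normal initializer is an assumption on our part.

```python
import torch
import torch.nn as nn

def init_weights(module, gain=0.02, orthogonal=False):
    # Normal(0, 0.02) init for classifier / feature-extractor layers;
    # orthogonal init with gain 0.02 for the mapping phi.
    if isinstance(module, (nn.Linear, nn.Conv2d)):
        if orthogonal:
            nn.init.orthogonal_(module.weight, gain=gain)
        else:
            nn.init.normal_(module.weight, mean=0.0, std=gain)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Placeholder architectures; the real NNs are given in Appendix G.
feature_extractor = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
classifier = nn.Linear(256, 10)
phi = nn.Linear(256, 256)  # maps source conditionals to target conditionals

feature_extractor.apply(init_weights)
classifier.apply(init_weights)
phi.apply(lambda m: init_weights(m, orthogonal=True))

params = (list(feature_extractor.parameters())
          + list(classifier.parameters())
          + list(phi.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)  # lr tuned in [1e-4, 1e-3]
batch_size = 200  # Nb = 200
```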