Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift

Authors: Remi Tachet des Combes, Han Zhao, Yu-Xiang Wang, Geoffrey J. Gordon

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our algorithms outperform the base versions, with vast improvements for large label distribution mismatches. Our code is available at https://tinyurl.com/y585xt6j. ... We apply our variants to artificial UDA tasks with large divergences between label distributions, and demonstrate significant performance gains compared to the algorithms' base versions. Finally, we evaluate them on standard domain adaptation tasks and also show improved performance."
Researcher Affiliation | Collaboration | Remi Tachet des Combes, Microsoft Research Montreal, Montreal, QC, Canada (retachet@microsoft.com); Han Zhao, D. E. Shaw & Co., New York, NY, USA (han.zhao@cs.cmu.edu); Yu-Xiang Wang, UC Santa Barbara, Santa Barbara, CA, USA (yuxiangw@cs.ucsb.edu); Geoff Gordon, Microsoft Research Montreal, Montreal, QC, Canada (ggordon@microsoft.com)
Pseudocode | Yes | "Algorithm 1: Importance-Weighted Domain Adaptation"
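The importance-weighted algorithms referenced above estimate class importance weights w[y] ≈ p_target(y) / p_source(y) from a source confusion matrix and the model's predictions on the unlabeled target set. A minimal sketch of this confusion-matrix-based estimation step, with hypothetical function and variable names (the paper's Algorithm 1 refines these weights during training):

```python
import numpy as np

def estimate_importance_weights(source_preds, source_labels, target_preds, num_classes):
    """Estimate w[y] ~ p_target(y) / p_source(y) from classifier predictions.

    Hedged sketch of confusion-matrix-based label-shift estimation; names
    and exact solver are illustrative, not the paper's reference code.
    """
    # C[i, j] = joint frequency of (predicted class i, true class j) on source.
    C = np.zeros((num_classes, num_classes))
    for pred, label in zip(source_preds, source_labels):
        C[pred, label] += 1
    C /= len(source_labels)

    # mu[i] = frequency of predicted class i on the unlabeled target data.
    mu = np.bincount(target_preds, minlength=num_classes) / len(target_preds)

    # Solve C w = mu; least squares for robustness when C is near-singular.
    w, *_ = np.linalg.lstsq(C, mu, rcond=None)
    return np.clip(w, 0.0, None)  # importance weights must be non-negative
```

With a perfect source classifier, C is diagonal with the source label frequencies, so the solve reduces to dividing target prediction frequencies by source label frequencies.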
Open Source Code | Yes | "Our code is available at https://tinyurl.com/y585xt6j."
Open Datasets | Yes | "We apply our three base algorithms, their importance-weighted versions, and the oracles to 4 standard DA datasets generating 21 tasks: Digits (MNIST ↔ USPS [18, 29]), Visda [51], Office-31 [45] and Office-Home [50]."
Dataset Splits | No | The paper mentions evaluating "best test accuracy throughout training" but does not explicitly detail train/validation/test splits or a cross-validation strategy in the provided text.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | "Our results all use a factor λ = 0.5. ... For full details, see App. ?? and ??."
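The quoted setup fixes a factor λ = 0.5 without defining it in the excerpt. One plausible reading, consistent with the paper's importance-weighted algorithms, is that λ blends the running weight estimate with uniform (all-ones) weights to stabilize training; a hedged sketch under that assumption, with hypothetical names:

```python
import numpy as np

def smooth_weights(w_est, lam=0.5):
    """Interpolate estimated importance weights toward uniform weights.

    Assumption: lam regularizes toward the all-ones vector; at lam = 0.5
    (the paper's reported value) the blend is the simple midpoint.
    """
    return lam * np.ones_like(w_est) + (1.0 - lam) * np.asarray(w_est, dtype=float)
```

At λ = 0, the estimated weights are used unchanged; at λ = 1, the algorithm falls back to its unweighted base version.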