Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment

Authors: Yifan Wu, Ezra Winston, Divyansh Kaushik, Zachary Lipton

ICML 2019

Reproducibility variables, each with the result and the supporting LLM response:
Research Type: Experimental
Evidence: "6. Experiments. To evaluate our approach, we implement Domain Adversarial Neural Networks (DANN) (Ganin et al., 2016), replacing the JS-divergence (domain classifier) with our proposed β-admissible distances (Section 5). Our experiments address the following questions: (i) Does DANN suffer the limitation as anticipated (Section 3) when faced with label distribution shift? (ii) If so, do our β-admissible distances overcome these limitations? (iii) Absent shifting label distributions, is our approach comparable to DANN? ... Synthetic datasets: We create a mixture-of-Gaussians binary classification dataset. ... Table 1 shows the target domain accuracy for different approaches. ... Real datasets: We experiment with the MNIST and USPS handwritten-digit datasets. ... Tables 2 and 3 show the target domain accuracy for different approaches with/without label distribution shift."
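The quoted passage describes relaxing exact distribution alignment: roughly, the target feature density need only be covered by (1 + β) times the source feature density, rather than matching it exactly. A minimal sketch of that one-sided coverage idea on discrete densities (an illustration of the relaxation only, not the paper's actual β-admissible distance; `relaxed_coverage_gap` is a hypothetical helper):

```python
def relaxed_coverage_gap(p_source, p_target, beta):
    """Total target mass not covered by (1 + beta) * source mass.

    Zero iff p_target[i] <= (1 + beta) * p_source[i] everywhere,
    mirroring a one-sided (asymmetric) alignment condition.
    """
    return sum(max(0.0, t - (1.0 + beta) * s)
               for s, t in zip(p_source, p_target))

# With beta = 0 this demands exact coverage; a label-shifted target
# needs some slack beta > 0 before the gap vanishes.
p_src = [0.5, 0.5]   # balanced source label marginal
p_tgt = [0.7, 0.3]   # shifted target label marginal
print(relaxed_coverage_gap(p_src, p_tgt, beta=0.0))  # ~0.2 of target mass uncovered
print(relaxed_coverage_gap(p_src, p_tgt, beta=0.5))  # 0.0 once there is enough slack
```

This makes the asymmetry concrete: source mass in regions with no target mass is never penalized, only uncovered target mass is.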
Researcher Affiliation: Academia
Evidence: "Yifan Wu, Ezra Winston, Divyansh Kaushik, Zachary C. Lipton ... Carnegie Mellon University. Correspondence to: Yifan Wu <yw4@andrew.cmu.edu>."
Pseudocode: No. The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code: No. The paper does not provide concrete access to source code, such as a specific repository link, an explicit code-release statement, or a mention of code in supplementary materials.
Open Datasets: Yes
Evidence: "Real datasets: We experiment with the MNIST and USPS handwritten-digit datasets."
Dataset Splits: No. The paper does not provide the dataset-split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the partitioning into train/validation/test sets. It reports "target domain accuracy" but not how the data were split for training or validation.
Hardware Specification: No. The paper does not report the specific hardware (exact GPU/CPU models, processor types, or memory amounts) used to run its experiments.
Software Dependencies: No. The paper does not name the ancillary software (e.g., libraries or solvers with version numbers, such as Python 3.8 or PyTorch 1.9) needed to replicate the experiments.
Experiment Setup: No. The paper states, "We always perform adversarial training with alternating minimization (see Appendix for details)," but the main body provides no concrete hyperparameter values or training configurations; these details are deferred to the Appendix.