Unsupervised Domain Adaptation With Distribution Matching Machines

Authors: Yue Cao, Mingsheng Long, Jianmin Wang

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Comprehensive experiments validate that the DMM approach significantly outperforms competitive methods on standard domain adaptation benchmarks." "We perform extensive experiments to evaluate DMM against state-of-the-art methods on standard domain adaptation benchmarks including both image and text datasets."
Researcher Affiliation | Academia | Yue Cao, Mingsheng Long, Jianmin Wang. KLiss, MOE; NEL-BDS; TNList; School of Software, Tsinghua University, China. caoyue10@gmail.com, mingsheng@tsinghua.edu.cn, jimwang@tsinghua.edu.cn
Pseudocode | No | The paper has a section titled 'Algorithm', but it describes the optimization paradigm and its steps in paragraph form and mathematical equations rather than as a structured pseudocode block or a clearly labeled algorithm figure. (A hedged sketch of such an alternating scheme is given below the table.)
Open Source Code | No | "Codes, datasets and configurations will be made available online."
Open Datasets | Yes | "Office-31 (Saenko et al. 2010) is the standard benchmark for domain adaptation..." "Caltech-256 (Griffin, Holub, and Perona 2007) is a standard database..." "In experiments, we adopt Office-10 + Caltech-10 (Fernando et al. 2013; Long et al. 2013)..." "Reuters-21578 is a text dataset of Reuters news articles."
Dataset Splits | No | "We use all source examples with labels and all target examples without labels for training, and report the average classification accuracy." "For all comparison methods, we select their optimal hyper-parameters by cross-validation on labeled source data as (Pan et al. 2011)." (A sketch of this source-only model selection protocol appears below the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory specifications, or cloud resources) used to run the experiments.
Software Dependencies | No | The paper mentions using the 'LIBSVM package' and 'quadprog in MATLAB' to solve its optimization problems, but it does not specify version numbers for these components or for any other libraries.
Experiment Setup | Yes | "We give parameter sensitivity analysis for DMM, which will validate that DMM can achieve stable performance for a wide range of hyper-parameter settings. We check sensitivity of subspace dimension r and penalty parameter λ. Figures 3(b) and 3(c) show DMM outperforms baselines for wide ranges of parameters r ∈ [10, 70], λ ∈ [10^-6, 10^-4]." (An illustrative sensitivity sweep is sketched below the table.)
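
Since the paper's 'Algorithm' section is prose-only, here is a minimal Python sketch of an alternating scheme in the same spirit: a kernel-mean-matching-style quadratic program for source instance weights (the role the paper assigns to quadprog in MATLAB), followed by a weighted SVM (the role of LIBSVM). The function names, the RBF kernel choice, the box bound B, and the single-pass structure are illustrative assumptions, not the paper's exact joint formulation, which couples the two steps through a shared objective.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def instance_weights(Xs, Xt, B=10.0, gamma=1.0):
    """QP step (assumed form): choose source weights alpha that minimise
    the squared MMD between the reweighted source and the target in an
    RBF feature space."""
    ns, nt = len(Xs), len(Xt)
    Kss = rbf_kernel(Xs, Xs, gamma=gamma)              # source-source kernel
    kst = rbf_kernel(Xs, Xt, gamma=gamma).sum(axis=1)  # sums over target points
    def mmd2(a):
        # (1/ns^2) a'Kss a - (2/(ns*nt)) a'kst, dropping the constant term
        return a @ Kss @ a / ns**2 - 2.0 * (a @ kst) / (ns * nt)
    res = minimize(mmd2, np.ones(ns), method="SLSQP",
                   bounds=[(0.0, B)] * ns,  # box constraint on each weight
                   constraints=[{"type": "eq",
                                 "fun": lambda a: a.sum() - ns}])  # keep mass
    return res.x

def dmm_like(Xs, ys, Xt, C=1.0, gamma=1.0):
    """SRM step: train an SVM on source data reweighted by the QP output.
    DMM alternates refinements of both steps; one pass is shown here."""
    alpha = instance_weights(Xs, Xt, gamma=gamma)
    return SVC(C=C, kernel="rbf", gamma=gamma).fit(Xs, ys, sample_weight=alpha)
```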
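
The 'Dataset Splits' entry notes that hyper-parameters are selected by cross-validation on labeled source data only, since no target labels are available. A minimal sketch of that protocol follows; the grid values and the 5-fold setting are assumptions, not the paper's.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def select_on_source(Xs, ys):
    """5-fold cross-validation on the labeled source domain only;
    target data never enters model selection."""
    grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}  # assumed grid
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    return search.fit(Xs, ys).best_params_
```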
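
For the sensitivity analysis quoted in the 'Experiment Setup' row, a generic sweep over the reported stable ranges (r in [10, 70], λ in [10^-6, 10^-4]) might look as follows. The `evaluate` callable is hypothetical: any function that trains DMM with a given (r, λ) pair and returns the average target accuracy.

```python
from itertools import product

def sensitivity_sweep(evaluate,
                      rs=(10, 20, 30, 40, 50, 60, 70),
                      lams=(1e-6, 1e-5, 1e-4)):
    """Grid over subspace dimension r and penalty lambda; `evaluate`
    (hypothetical) returns target accuracy for one (r, lam) setting."""
    scores = {(r, lam): evaluate(r, lam) for r, lam in product(rs, lams)}
    best = max(scores, key=scores.get)
    return best, scores
```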