Implicit Task-Driven Probability Discrepancy Measure for Unsupervised Domain Adaptation

Authors: Mao Li, Kaiqi Jiang, Xinhua Zhang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We finally validate the implicit task-driven discrepancy by comparing i-MDD and i-CDD against state-of-the-art methods for unsupervised domain adaptation, especially MDD and CDD. Ablation studies will also be carried out to examine the influence of various components. More details on the experiment setup and results are available in Appendix D."
Researcher Affiliation | Academia | Mao Li, Kaiqi Jiang, Xinhua Zhang; Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607; {mli206,kjiang10,zhangx}@uic.edu
Pseudocode | No | The paper includes illustrations and mathematical formulations but no pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper states, "We implemented our methods in PyTorch," but does not provide a link or an explicit statement about the public availability of source code for the described methodology.
Open Datasets | Yes | "Datasets. We adopted three public domain datasets for UDA benchmarking. Office-31 [62] is a standard dataset for real-world domain adaptation. ... Office-Home [63] is a more challenging dataset for visual domain adaptation. ... ImageCLEF-DA [64] consists of images from three domains: Caltech-256, ImageNet ILSVRC 2012 and Pascal VOC 2012."
Dataset Splits | Yes | "We followed the commonly used experimental protocol for unsupervised domain adaptation from [14]. We report the average accuracy and standard deviation of five independent runs."
Hardware Specification | No | The paper mentions "GPU memory" when discussing cache size, but it does not specify any particular GPU models, CPU types, or other hardware used for running the experiments.
Software Dependencies | No | The paper mentions "We implemented our methods in PyTorch" but does not specify a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | "For i-MDD we mainly used the hyper-parameters from [23], i.e., the margin factor γ in (13) was chosen from {2, 3, 4} and was kept the same for all tasks on the same dataset. For i-CDD, the trade-off coefficient β between intra-class loss and inter-class loss in (14) was chosen from {0.1, 0.01, 0.001}. The cache size for each class is 30. ... The head classifier (in both i-CDD and i-MDD) and the auxiliary classifier (h in i-MDD) are both 1-layer neural networks with width 1024. ... We used mini-batch SGD with Nesterov momentum 0.9. The initial learning rate was 0.004, which was adjusted according to [14]. The mini-batch size was 150 for each domain."
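The quoted setup can be collected into a small configuration sketch. Note the hedge: the paper only says the learning rate was "adjusted according to [14]"; the inverse-decay rule below (with α = 10, β = 0.75, a schedule commonly used in that line of UDA work) is an assumption, and only the initial rate 0.004 is stated.

```python
# Sketch of the quoted training configuration. The inverse-decay schedule
# (alpha=10, beta=0.75) is an ASSUMPTION about what "adjusted according
# to [14]" means; only lr0 = 0.004 is stated in the paper.

CONFIG = {
    "margin_gamma_grid": [2, 3, 4],       # i-MDD margin factor, fixed per dataset
    "beta_grid": [0.1, 0.01, 0.001],      # i-CDD intra-/inter-class trade-off
    "cache_size_per_class": 30,
    "head_width": 1024,                   # 1-layer head and auxiliary classifiers
    "momentum": 0.9,                      # Nesterov momentum for mini-batch SGD
    "lr0": 0.004,                         # initial learning rate
    "batch_size_per_domain": 150,
}

def lr_schedule(p: float, lr0: float = 0.004,
                alpha: float = 10.0, beta: float = 0.75) -> float:
    """Assumed inverse-decay schedule; p is training progress in [0, 1]."""
    return lr0 / (1.0 + alpha * p) ** beta
```

Under these assumed constants, the rate starts at 0.004 for p = 0 and decays monotonically to roughly 0.00066 at p = 1.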