Domain Agnostic Learning with Disentangled Representations
Authors: Xingchao Peng, Zijun Huang, Ximeng Sun, Kate Saenko
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate experimentally that when the target domain labels are unknown, DADA leads to state-of-the-art performance on several image classification datasets. Comprehensive experiments on standard image recognition datasets demonstrate that our derived disentangled representation achieves significant improvements over the state-of-the-art methods on the task of domain-agnostic learning. |
| Researcher Affiliation | Collaboration | 1 Computer Science Department, Boston University, 111 Cummington Mall, Boston, MA 02215, USA; email: xpeng@bu.edu. 2 Columbia University and MADO AI Research, 116th St and Broadway, New York, NY 10027, USA; email: zijun.huang@columbia.edu. |
| Pseudocode | Yes | Algorithm 1: Learning algorithm for DADA (a hedged training-loop sketch follows the table) |
| Open Source Code | No | All of our experiments are implemented in the PyTorch platform (http://pytorch.org). |
| Open Datasets | Yes | We compare the DADA model to state-of-the-art domain adaptation algorithms on the following tasks: digit classification (MNIST, SVHN, USPS, MNIST-M, Synthetic Digits) and image recognition (Office-Caltech10 (Gong et al., 2012), DomainNet (Peng et al., 2018; http://ai.bu.edu/M3SDA/)). Sample images of these datasets can be seen in Figure 2. Table 6 (supplementary material) shows the detailed number of images we use in our experiments. (A loading sketch for the public digit datasets follows the table.) |
| Dataset Splits | No | The paper refers to "training and testing data", a "source domain", and a "target domain", but does not explicitly provide specific percentages, sample counts, or methods for splitting data into training, validation, and test sets. It mentions that more implementation details are in the supplementary material, which is not provided. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It only refers to general neural network concepts like 'deep CNNs'. |
| Software Dependencies | No | All of our experiments are implemented in the PyTorch platform (http://pytorch.org). |
| Experiment Setup | Yes | In the optimization procedure, we set the learning rate of randomly initialized parameters to ten times that of the pretrained parameters. Other components are randomly initialized from a normal distribution. (An optimizer sketch follows the table.) |
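The paper's Algorithm 1 is not reproduced in this report. For orientation only, below is a minimal PyTorch sketch of what a DADA-style training step could look like, assuming simple linear stand-ins for the feature generator, disentangler, class identifier, domain identifier, and reconstructor. The module shapes, loss weights, and the gradient-reversal trick are illustrative assumptions, not the authors' implementation; the paper's mutual-information (MINE) and ring-loss terms are noted but omitted.

```python
# Hypothetical sketch of one DADA-style training step; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer, a common trick for adversarial domain losses."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()

feat_dim, n_classes, n_domains = 512, 10, 2
G = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())  # feature generator
D_di = nn.Linear(feat_dim, feat_dim)   # disentangler: domain-invariant branch
D_ds = nn.Linear(feat_dim, feat_dim)   # disentangler: domain-specific branch
C = nn.Linear(feat_dim, n_classes)     # class identifier
DI = nn.Linear(feat_dim, n_domains)    # domain identifier
R = nn.Linear(2 * feat_dim, feat_dim)  # reconstructor

params = [p for m in (G, D_di, D_ds, C, DI, R) for p in m.parameters()]
opt = torch.optim.SGD(params, lr=1e-3, momentum=0.9)

def train_step(x, y, d):
    """x: images, y: class labels (source only), d: domain labels."""
    f = G(x)
    f_di, f_ds = D_di(f), D_ds(f)
    # (1) classify from the domain-invariant features
    loss_cls = F.cross_entropy(C(f_di), y)
    # (2) domain identifier sees both branches; gradient reversal pushes
    #     f_di toward domain confusion while f_ds stays domain-predictive
    loss_dom = F.cross_entropy(DI(GradReverse.apply(f_di)), d) + \
               F.cross_entropy(DI(f_ds), d)
    # (3) reconstruction keeps the disentangled parts informative
    loss_rec = F.mse_loss(R(torch.cat([f_di, f_ds], dim=1)), f.detach())
    # The paper additionally minimizes mutual information between f_di and
    # f_ds with a MINE estimator and applies a ring loss; omitted here.
    loss = loss_cls + loss_dom + 0.1 * loss_rec  # 0.1 is an arbitrary weight
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Smoke test with random data:
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, n_classes, (8,))
d = torch.randint(0, n_domains, (8,))
train_step(x, y, d)
```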
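For reference, the public digit datasets named in the Open Datasets row can be fetched through torchvision; this is a convenience sketch, not the authors' data pipeline. The Resize/Grayscale preprocessing is an assumption made here to unify image shapes and channel counts across datasets.

```python
# Sketch: loading the torchvision-bundled digit datasets from the paper.
# MNIST-M and Synthetic Digits are NOT bundled with torchvision and must be
# obtained separately (e.g. via the authors' page, http://ai.bu.edu/M3SDA/).
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize(32),                        # unify spatial size
    transforms.Grayscale(num_output_channels=3),  # unify channel count
    transforms.ToTensor(),
])

mnist = datasets.MNIST("data", train=True, download=True, transform=tfm)
usps = datasets.USPS("data", train=True, download=True, transform=tfm)
svhn = datasets.SVHN("data", split="train", download=True, transform=tfm)
```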
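The quoted setup, a 10x learning rate for randomly initialized parameters relative to pretrained ones, maps directly onto PyTorch optimizer parameter groups. Below is a minimal sketch assuming a ResNet-50 backbone, a base learning rate of 1e-3, DomainNet's 345 categories, and a std-0.01 normal initialization; all of these concrete values are illustrative assumptions, not figures from the paper.

```python
# Sketch of the quoted optimizer setup: new layers train at 10x the
# learning rate of the ImageNet-pretrained backbone.
import torch
from torchvision import models

net = models.resnet50(weights="IMAGENET1K_V1")
net.fc = torch.nn.Linear(net.fc.in_features, 345)  # new, randomly initialized head

# "Other components are randomly initialized with normal distribution";
# std=0.01 is an assumption, the paper does not state it.
torch.nn.init.normal_(net.fc.weight, std=0.01)
torch.nn.init.zeros_(net.fc.bias)

base_lr = 1e-3  # illustrative; the paper does not quote a base lr here
optimizer = torch.optim.SGD(
    [
        # pretrained backbone parameters at the base learning rate
        {"params": [p for n, p in net.named_parameters()
                    if not n.startswith("fc")], "lr": base_lr},
        # randomly initialized head at ten times the base learning rate
        {"params": net.fc.parameters(), "lr": 10 * base_lr},
    ],
    momentum=0.9,
)
```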