Partial Disentanglement for Domain Adaptation
Authors: Lingjing Kong, Shaoan Xie, Weiran Yao, Yujia Zheng, Guangyi Chen, Petar Stojanov, Victor Akinwande, Kun Zhang
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 6. Experiments on Synthetic Data, 7. Experiments on Real-world Data |
| Researcher Affiliation | Academia | Carnegie Mellon University, USA; Mohamed bin Zayed University of Artificial Intelligence, UAE; Broad Institute of MIT and Harvard, USA. |
| Pseudocode | Yes | Algorithm 1 Training iMSDA |
| Open Source Code | No | The paper does not provide any explicit statements about the availability of source code or a link to a code repository. |
| Open Datasets | Yes | PACS (Li et al., 2017) is a multi-domain dataset containing 9,991 images from 4 domains of different styles: Photo, Art Painting, Cartoon, and Sketch. These domains share the same seven categories. The Office-Home (Venkateswara et al., 2017) dataset consists of 4 domains, with each domain containing images from 65 categories of everyday objects, for a total of around 15,500 images. |
| Dataset Splits | No | The paper mentions 'labeled training data, and the unlabeled test data' and refers to 'labeled observations' for source domains and 'unlabeled instances' for target domains, but it does not provide specific percentages or counts for training, validation, and test splits for the benchmark datasets used. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running its experiments, such as GPU or CPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions using ResNet-18, ResNet-50, SGD, and Deep Sigmoidal Flow but does not provide specific version numbers for these software components or any other libraries/frameworks. |
| Experiment Setup | Yes | We apply AdamW to train the VAE and flow models for 100 epochs. We use a learning rate of 0.002 with a batch size of 128. The weight decay parameter of AdamW is set to 0.0001. For VAE training, we set the β parameter of the KL loss term to 0.1. For the two datasets, we use SGD with Nesterov momentum with learning rate 0.01. ... For hyper-parameters, we fix α1 = 0.1 for all our experiments and select α2 in [1e-5, 5e-5, 1e-4, 5e-4]. (A hedged configuration sketch follows the table.) |
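
The reported setup maps onto a handful of optimizer and loss-weight definitions. Below is a minimal sketch of that configuration, assuming a PyTorch implementation; the module names (`vae`, `flow`, `classifier`), their architectures, and the SGD momentum value are placeholders and assumptions, not the authors' released code (none is available).

```python
# Hypothetical configuration sketch of the reported iMSDA training setup.
# Networks are stand-ins; only the optimizer/hyper-parameter values below
# come from the paper's reported setup.
import torch
import torch.nn as nn

# Placeholder modules for the VAE, flow, and classification network.
vae = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 512))
flow = nn.Sequential(nn.Linear(64, 64))
classifier = nn.Linear(512, 7)  # e.g. the 7 PACS categories

# AdamW for the VAE and flow models: lr 0.002, weight decay 0.0001.
vae_flow_opt = torch.optim.AdamW(
    list(vae.parameters()) + list(flow.parameters()),
    lr=2e-3,
    weight_decay=1e-4,
)

# SGD with Nesterov momentum (lr 0.01) for the classification network;
# the momentum value itself is an assumption, as the paper does not state it.
cls_opt = torch.optim.SGD(
    classifier.parameters(), lr=0.01, momentum=0.9, nesterov=True
)

# Loss-term weights from the reported setup.
beta = 0.1                                # weight on the KL term of the VAE loss
alpha1 = 0.1                              # fixed across all experiments
alpha2_grid = [1e-5, 5e-5, 1e-4, 5e-4]    # α2 is selected per dataset from this grid

batch_size = 128
num_epochs = 100
```

The α2 values are kept as a grid rather than a single constant to reflect that the paper selects α2 per experiment instead of fixing it globally.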