Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation
Authors: Zihao Xu, Guang-Yuan Hao, Hao He, Hao Wang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on both synthetic and real data verify that our model can produce interpretable domain indices which enable us to achieve superior performance compared to state-of-the-art domain adaptation methods. In this section, we compare VDI with existing DA methods on both synthetic and real-world datasets. |
| Researcher Affiliation | Academia | 1Rutgers University, 2Hong Kong University of Science and Technology, 3Massachusetts Institute of Technology |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled "Pseudocode" or "Algorithm". |
| Open Source Code | Yes | Code is available at https://github.com/Wang-ML-Lab/VDI. |
| Open Datasets | Yes | Circle (Wang et al., 2020) is a synthetic dataset with 30 domains for binary classification. DG-15 and DG-60 (Xu et al., 2022) are synthetic datasets. TPT-48 (Xu et al., 2022) is a real-world regression dataset... CompCars (Yang et al., 2015) is a car image dataset... |
| Dataset Splits | No | The paper mentions using source and target domains but does not explicitly state train/validation/test dataset splits with specific percentages or counts needed for reproduction. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, or cloud computing instances) used for running the experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., 'Python 3.x', 'PyTorch 1.x'). |
| Experiment Setup | Yes | For experiments on all 4 datasets, we set the dimension of global domain indices to 2. For Circle, DG-15, DG-60, the dimension of local domain indices is 4, while for TPT-48 and CompCars, the dimension of local domain indices is 8. Our model is trained with 20 to 70 warmup steps, learning rates ranging from 1×10⁻⁵ to 1×10⁻⁴, and λ_d ranging from 0.1 to 1. (See the configuration sketch below the table.) |
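
The Experiment Setup row collects the hyperparameters reported in the paper. The sketch below gathers those values into a single configuration object for quick reference; the class name, field names, and default choices are illustrative assumptions and do not correspond to the actual configuration interface of the VDI codebase.

```python
# Hypothetical hyperparameter sketch based only on the values quoted above;
# key names and defaults do not mirror the real config files in the VDI repo.
from dataclasses import dataclass


@dataclass
class VDIConfig:
    dataset: str                  # "circle", "dg15", "dg60", "tpt48", or "compcars"
    global_index_dim: int = 2     # dimension of global domain indices (all datasets)
    local_index_dim: int = 4      # 4 for Circle/DG-15/DG-60, 8 for TPT-48/CompCars
    warmup_steps: int = 20        # paper reports 20 to 70 warmup steps
    learning_rate: float = 1e-4   # paper reports 1e-5 to 1e-4
    lambda_d: float = 0.5         # paper reports λ_d ranging from 0.1 to 1


def make_config(dataset: str) -> VDIConfig:
    """Pick the local domain-index dimension reported for each dataset family."""
    local_dim = 8 if dataset in {"tpt48", "compcars"} else 4
    return VDIConfig(dataset=dataset, local_index_dim=local_dim)


if __name__ == "__main__":
    for name in ["circle", "dg15", "dg60", "tpt48", "compcars"]:
        print(make_config(name))
```

Exact warmup steps, learning rates, and λ_d values vary per dataset within the reported ranges, so the defaults above are placeholders rather than the settings used for any specific result.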