Continuously Indexed Domain Adaptation
Authors: Hao Wang, Hao He, Dina Katabi
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical results show that our approach outperforms the state-of-the-art domain adaptation methods on both synthetic and real-world medical datasets. We evaluate CIDA and its variants on two toy datasets, one image dataset (Rotating MNIST), and three real-world medical datasets. |
| Researcher Affiliation | Academia | MIT Computer Science and Artificial Intelligence Laboratory, Massachusetts, USA. |
| Pseudocode | No | No explicit pseudocode or algorithm block was found. |
| Open Source Code | No | Code will soon be available at https://github.com/hehaodele/CIDA |
| Open Datasets | Yes | We further evaluate our methods on the Rotating MNIST dataset. We use three medical datasets: the Sleep Heart Health Study (SHHS) (Quan et al., 1997), the Multi-Ethnic Study of Atherosclerosis (MESA) (Zhang et al., 2018), and the Study of Osteoporotic Fractures (SOF) (Cummings et al., 1990). |
| Dataset Splits | Yes | We designate images rotated by 0° to 45° as the labeled source domain, and assign images rotated by 45° to 360° to the target domains. We use domains 1 to 6 as source domains and the rest as target domains. Domain extrapolation: for example, the source domain has data with a domain index (age) in the range [44, 52], while the target domain contains data with a domain index range of (52, 90]. (See the split sketch after the table.) |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, processor types, memory amounts) used for running experiments were mentioned. |
| Software Dependencies | No | All methods are implemented using PyTorch (Paszke et al., 2019) with the same neural network architecture. |
| Experiment Setup | Yes | All methods are implemented using PyTorch (Paszke et al., 2019) with the same neural network architecture. λd is chosen from {0.2, 0.5, 1.0, 2.0, 5.0} and kept the same for all tasks associated with the same dataset (see the Supplement for more details about training). (See the λd selection sketch below.) |
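
The Dataset Splits row above describes partitioning Rotating MNIST by rotation angle, with angles of 0° to 45° forming the labeled source domain and angles of 45° to 360° forming the target domains. The snippet below is a minimal sketch of such a split, assuming torchvision's MNIST loader and tensor-based `rotate`; the helper name `make_rotating_mnist`, the sample count, and the uniform angle sampling are assumptions for illustration, not the authors' code.

```python
import torch
from torchvision import datasets, transforms
from torchvision.transforms.functional import rotate

def make_rotating_mnist(root="./data", n_samples=10_000, seed=0):
    """Split MNIST by rotation angle: <=45° -> labeled source, >45° -> target."""
    torch.manual_seed(seed)
    base = datasets.MNIST(root, train=True, download=True,
                          transform=transforms.ToTensor())

    source, target = [], []
    for i in range(n_samples):
        img, label = base[i]                  # img: (1, 28, 28) float tensor
        angle = torch.rand(1).item() * 360.0  # rotation angle acts as the domain index
        record = (rotate(img, angle), label, angle)
        if angle <= 45.0:
            source.append(record)             # labeled source domain
        else:
            target.append(record)             # target domains
    return source, target
```

In the paper's setting the target-domain labels would be withheld during adaptation and used only for evaluation.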
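The Experiment Setup row reports that λd is selected from {0.2, 0.5, 1.0, 2.0, 5.0} and then held fixed for all tasks on a given dataset. Below is a hedged sketch of that selection loop; `train_cida`, `evaluate`, and the data arguments are hypothetical placeholders (the quoted text specifies only the candidate grid), and reading λd as the weight on the domain-discriminator loss is our interpretation of the paper.

```python
# Candidate grid for lambda_d, as quoted in the Experiment Setup row.
LAMBDA_D_GRID = [0.2, 0.5, 1.0, 2.0, 5.0]

def select_lambda_d(source_data, target_data, validation_data,
                    train_cida, evaluate):
    """Pick one lambda_d per dataset and keep it fixed across that dataset's tasks.

    `train_cida` and `evaluate` are hypothetical callables standing in for the
    authors' training and evaluation code.
    """
    best_lambda, best_score = None, float("-inf")
    for lambda_d in LAMBDA_D_GRID:
        model = train_cida(source_data, target_data, lambda_d=lambda_d)
        score = evaluate(model, validation_data)
        if score > best_score:
            best_lambda, best_score = lambda_d, score
    return best_lambda
```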