Graph-Relational Domain Adaptation

Authors: Zihao Xu, Hao He, Guang-He Lee, Bernie Wang, Hao Wang

ICLR 2022 | Conference PDF | Archive PDF

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results show that our approach successfully generalizes uniform alignment, naturally incorporates domain information represented by graphs, and improves upon existing domain adaptation methods on both synthetic and real-world datasets.
Researcher Affiliation | Collaboration | Zihao Xu (1), Hao He (2), Guang-He Lee (2), Yuyang Wang (3), Hao Wang (1); (1) Rutgers University, (2) Massachusetts Institute of Technology, (3) AWS AI Labs; zihao.xu@rutgers.edu, {haohe, guanghe}@mit.edu, yuyawang@amazon.com, hw488@cs.rutgers.edu
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | Code will soon be available at https://github.com/Wang-ML-Lab/GRDA
Open Datasets | Yes | TPT-48 contains the monthly average temperature for the 48 contiguous states in the US from 2008 to 2019. The raw data are from the National Oceanic and Atmospheric Administration's Climate Divisional Database (nClimDiv) and Gridded 5km GHCN-Daily Temperature and Precipitation Dataset (nClimGrid) (Vose et al., 2014). We use the data processed by Washington Post (WP, 2020).
Dataset Splits | No | No explicit training, validation, or test splits are given as percentages or absolute counts. The paper describes source and target domains, but not how the data within those domains are split for training, validation, and testing.
Hardware Specification | Yes | We run all our experiments on a Tesla V100 GPU using AWS SageMaker (Liberty et al., 2020).
Software Dependencies | No | All the algorithms are implemented in PyTorch (Paszke et al., 2019) and the balancing hyperparameter λd is chosen from 0.1 to 1 (see the Appendix for more details on training). ... The models are trained using the Adam (Kingma & Ba, 2015) optimizer...
Experiment Setup | Yes | For experiments on all 4 datasets, we choose k = 2. We use a mixture policy for sampling nodes (domains) to train GRDA's discriminator. One method is to randomly sample several nodes, and another is to pick the nodes from randomly chosen connected sub-graphs. We pick one of the policies randomly in each iteration and calculate the loss of each forward pass. The models are trained using the Adam (Kingma & Ba, 2015) optimizer with learning rates ranging from 1e-5 to 1e-4, and λd ranging from 0.1 to 1.
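
To make the quoted setup concrete, below is a minimal PyTorch-style sketch of the mixture sampling policy and the optimizer configuration. It assumes k is the number of domains sampled per iteration and that the domain graph is given as an adjacency list adj; the helper names, the placeholder model, and the lambda_d weighting are illustrative rather than taken from the paper's released code.

import random
from torch import nn, optim

# Hypothetical adjacency list for the domain graph: adj[i] holds the neighbors of domain i.

def sample_random_nodes(num_nodes, k):
    # Policy 1: sample k domains uniformly at random.
    return random.sample(range(num_nodes), k)

def sample_connected_subgraph(adj, k):
    # Policy 2: grow a connected sub-graph of up to k domains by BFS from a random seed node.
    seed = random.randrange(len(adj))
    visited, frontier = [seed], [seed]
    while frontier and len(visited) < k:
        node = frontier.pop(0)
        for nbr in adj[node]:
            if nbr not in visited and len(visited) < k:
                visited.append(nbr)
                frontier.append(nbr)
    return visited

def sample_domains(adj, k=2):
    # Mixture policy: pick one of the two policies uniformly at random each iteration.
    if random.random() < 0.5:
        return sample_random_nodes(len(adj), k)
    return sample_connected_subgraph(adj, k)

# Optimizer configuration quoted above: Adam with a learning rate between 1e-5 and 1e-4
# and a balancing weight lambda_d between 0.1 and 1. The model below is only a placeholder.
model = nn.Linear(16, 2)
optimizer = optim.Adam(model.parameters(), lr=1e-4)
lambda_d = 0.5

In a training loop, one would call sample_domains(adj) at each iteration, compute the prediction loss and the discriminator loss on the sampled domains, and minimize prediction_loss + lambda_d * discriminator_loss with the Adam optimizer.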