Variational Recurrent Adversarial Deep Domain Adaptation

Authors: Sanjay Purushotham, Wilka Carvalho, Tanachat Nilanon, Yan Liu

ICLR 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies helps our model's ability to create domain-invariant representations, allowing our model to outperform current state-of-the-art deep domain adaptation approaches.
Researcher Affiliation | Academia | Sanjay Purushotham*, Wilka Carvalho*, Tanachat Nilanon, Yan Liu Department of Computer Science University of Southern California Los Angeles, CA 90089, USA {spurusho,wcarvalh,nilanon,yanliu.cs}@usc.edu
Pseudocode | No | The paper provides mathematical formulations and block diagrams of the model but does not include explicit pseudocode or algorithm blocks (a high-level sketch of the adversarial mechanism appears after the table).
Open Source Code | No | Footnote [3] on page 4 states: 'Codes will be publicly released soon', indicating future rather than current availability.
Open Datasets | Yes | MIMIC-III (Johnson et al. (2016)) is a public dataset with deidentified clinical care data collected at Beth Israel Deaconess Medical Center from 2001 to 2012. ... Child-AHRF dataset: This is a PICU dataset which contains health records of 398 children patients with acute hypoxemic respiratory failure in the intensive care unit at Children's Hospital Los Angeles (CHLA) (Khemani et al. (2009)).
Dataset Splits | Yes | Source domain data was split into train/validation subsets with a 70/30 ratio and target domain data into train/validation/test subsets with a 70/15/15 ratio (a split sketch under stated assumptions follows the table).
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running the experiments.
Software Dependencies | No | The paper mentions using the 'Adam optimizer (Kingma & Ba (2014))' but does not provide specific version numbers for other key software components, libraries, or programming languages.
Experiment Setup | Yes | All the deep domain adaptation models including ours had depth of size 8 (including output classifier layers). We used the Adam optimizer (Kingma & Ba (2014)) and ran all models for 500 epochs with a learning rate of 3e-4. We set an early stopping criteria that the model does not experience a decrease in the validation loss for 20 epochs. (A training-loop sketch with these settings follows the table.)
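Since the paper itself provides no pseudocode, the following is a minimal sketch of the gradient-reversal mechanism (Ganin & Lempitsky style) that underlies adversarial deep domain adaptation models such as VRADA. The module names, `feat_dim`, and layer sizes are illustrative assumptions, not the authors' implementation; in VRADA the input features would come from the variational RNN's latent representation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the feature extractor.
        return -ctx.lambd * grad_output, None

class AdversarialHead(nn.Module):
    """Domain discriminator fed through gradient reversal.

    feat_dim is a placeholder for the size of the shared latent
    representation; the hidden width (32) is an arbitrary choice.
    """
    def __init__(self, feat_dim=64, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.domain_clf = nn.Sequential(
            nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 2)
        )

    def forward(self, features):
        reversed_feats = GradReverse.apply(features, self.lambd)
        return self.domain_clf(reversed_feats)
```

Training the discriminator through this layer pushes the feature extractor toward domain-invariant representations, which is the effect the paper's experiments measure.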
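The 70/30 source and 70/15/15 target splits quoted in the Dataset Splits row can be reproduced with two-stage random splits. This is a minimal sketch assuming plain in-memory arrays; `source_data` and `target_data` are placeholders, not the authors' preprocessing pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the real feature matrices.
source_data = np.random.randn(1000, 10)
target_data = np.random.randn(1000, 10)

# Source domain: 70% train / 30% validation.
src_train, src_val = train_test_split(source_data, test_size=0.30, random_state=0)

# Target domain: hold out 30%, then split that half-and-half -> 70/15/15.
tgt_train, tgt_hold = train_test_split(target_data, test_size=0.30, random_state=0)
tgt_val, tgt_test = train_test_split(tgt_hold, test_size=0.50, random_state=0)
```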
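The optimization settings in the Experiment Setup row map directly onto a standard training loop. Below is a minimal sketch using the reported values (Adam, learning rate 3e-4, at most 500 epochs, early stopping after 20 epochs without a drop in validation loss); `model`, `loss_fn`, and the data loaders are assumed placeholders.

```python
import torch

def train(model, loss_fn, train_loader, val_loader, device="cpu"):
    optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)  # lr from the paper
    best_val, patience, wait = float("inf"), 20, 0
    for epoch in range(500):  # 500 epochs maximum
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            optimizer.step()
        # Mean validation loss over all batches.
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x.to(device)), y.to(device)).item()
                           for x, y in val_loader) / len(val_loader)
        if val_loss < best_val:
            best_val, wait = val_loss, 0
        else:
            wait += 1
            if wait >= patience:  # no improvement for 20 epochs
                break
```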