Adversarial Unsupervised Representation Learning for Activity Time-Series

Authors: Karan Aggarwal, Shafiq Joty, Luis Fernandez-Luque, Jaideep Srivastava

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method on four disorder prediction tasks using linear classifiers. Empirical evaluation demonstrates that our proposed method scales and performs better than many strong baselines.
Researcher Affiliation | Academia | ¹University of Minnesota, ²Nanyang Technological University, ³Qatar Computing Research Institute
Pseudocode | No | No structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures) were found.
Open Source Code | No | No explicit statement about releasing source code, or a link to a code repository for the described methodology, was found.
Open Datasets | Yes | We use Study of Latinos (SOL) (Sorlie et al. 2010) and Multi-Ethnic Study of Atherosclerosis (MESA) (Bild et al. 2002) datasets.
Dataset Splits | Yes | We use an 80%/10%/10% split for train, validation, and test sets, repeated 10 times, and we report the mean scores.
Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running experiments were provided.
Software Dependencies | No | No specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8 or CPLEX 12.4) were provided. Only general software/model types such as 'Logistic Regression', 'CNN', 'LSTM', and 'Adam Optimizer' are mentioned, without versions.
Experiment Setup | Yes | The embedding size d=100 was fixed for all models. The weighting parameters λ and β were set to 0.05 and 0.5, respectively. We tuned w ∈ {12, 20, 30, 50, 100, 120, 500}, η ∈ {0, 0.25, 0.5, 0.75, 1}, and |N(T_k)| ∈ {2, 4} on the development set. We chose w of 20, 20, 30, and 50 for sample2vec, hour2vec, day2vec, and week2vec, respectively. η of 0.25 and 0.5 were chosen for day2vec and hour2vec, respectively. A neighbor set size of 2 was chosen. For the CNN baseline, 3-, 4-, 3-, and 3-layered networks were used for sleep apnea, diabetes, insomnia, and hypertension, with a dropout of 0.5, trained with the Adam optimizer.
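The dataset-split protocol quoted above (80%/10%/10%, repeated 10 times, mean score reported) can be sketched as follows. This is a minimal illustration, not the authors' code; the function names and the evaluation callback are placeholders chosen here.

```python
import random


def random_split(indices, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle indices and cut them into train/validation/test partitions."""
    rng = random.Random(seed)
    idx = list(indices)
    rng.shuffle(idx)
    n_train = int(len(idx) * train_frac)
    n_val = int(len(idx) * val_frac)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test


def mean_score_over_repeats(indices, evaluate, repeats=10):
    """Repeat the random split `repeats` times and average the scores,
    mirroring the 'repeated 10 times, report the mean' protocol."""
    scores = [evaluate(*random_split(indices, seed=s)) for s in range(repeats)]
    return sum(scores) / len(scores)
```

Using a different seed per repeat gives independent shuffles while keeping each individual split reproducible.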
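The hyperparameters reported in the experiment-setup row can be collected into a single configuration for reference. The dictionary below is an illustrative sketch; the key names and grouping are chosen here and do not come from the authors' code.

```python
# Hyperparameters as reported in the reproducibility row above.
# Key names are illustrative, not taken from the paper's implementation.
HPARAMS = {
    "embedding_dim": 100,        # d, fixed for all models
    "lambda": 0.05,              # weighting parameter λ
    "beta": 0.5,                 # weighting parameter β
    "neighbor_set_size": 2,      # |N(T_k)|
    "window": {                  # chosen w per granularity
        "sample2vec": 20,
        "hour2vec": 20,
        "day2vec": 30,
        "week2vec": 50,
    },
    "eta": {                     # chosen η where reported
        "hour2vec": 0.5,
        "day2vec": 0.25,
    },
    "cnn_layers": {              # CNN baseline depth per prediction task
        "sleep_apnea": 3,
        "diabetes": 4,
        "insomnia": 3,
        "hypertension": 3,
    },
    "dropout": 0.5,
    "optimizer": "Adam",
}
```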