PrimeNet: Pre-training for Irregular Multivariate Time Series
Authors: Ranak Roy Chowdhury, Jiacheng Li, Xiyuan Zhang, Dezhi Hong, Rajesh K. Gupta, Jingbo Shang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiment results show that PrimeNet significantly outperforms state-of-the-art methods on naturally irregular and asynchronous data from healthcare and IoT applications for several downstream tasks, including classification, interpolation, and regression, under both few-shot and full training data settings. |
| Researcher Affiliation | Collaboration | Ranak Roy Chowdhury¹, Jiacheng Li¹, Xiyuan Zhang¹, Dezhi Hong², Rajesh K. Gupta¹, Jingbo Shang¹ — ¹University of California, San Diego; ²Amazon. {rrchowdh, j9li, gupta, jshang}@eng.ucsd.edu, {xiyuanzh}@ucsd.edu, hondezhi@amazon.com |
| Pseudocode | Yes | Algorithm 1: TimeCL Data Augmentation; Algorithm 2: TimeReco Data Augmentation |
| Open Source Code | Yes | Code is publicly available at https://github.com/ranakroychowdhury/PrimeNet |
| Open Datasets | Yes | Datasets PhysioNet Challenge 2012 (Silva et al. 2012) and MIMIC-III (Johnson et al. 2016) are multivariate time series datasets... Activity (Kaluža et al. 2010) dataset has 3-D positions... Appliances Energy (Tan et al. 2021) dataset contains... |
| Dataset Splits | No | During pretraining, we measure contrastive learning classification (i.e. how many samples are predicted correctly among the 2B sub-samples) and use the validation accuracy for early stopping. While validation is mentioned, specific details about dataset splits (e.g., percentages or exact counts for training, validation, and testing) are not provided. |
| Hardware Specification | No | The paper mentions 'efficient GPU implementation' but does not specify any particular GPU models, CPU models, or other hardware specifications used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., specific libraries, frameworks, or programming language versions like 'PyTorch 1.9' or 'Python 3.8'). |
| Experiment Setup | Yes | We compute Cross-Entropy Loss for classification and Root Mean Squared Error (RMSE) for regression and interpolation. We conduct grid search on hyper-parameters, η = (0.3, 0.4, 0.5, 0.6, 0.7), α = (0.15, 0.05, 0.03), J = (1, 3, 5), µl, λl = (0.3, 0.4) and µu, λu = (0.7, 0.6), to report test results based on the best held-out validation performance. Best values for η = 0.5, 0.6, 0.5, 0.5 for PhysioNet, MIMIC-III, Activity, Appliances Energy, respectively. |
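The grid search described in the experiment-setup row can be sketched as an exhaustive sweep with validation-based model selection. This is a minimal illustration, not the authors' code: the hyper-parameter names mirror the paper's symbols, `validate` is a hypothetical stand-in for training PrimeNet under one configuration and returning held-out validation accuracy, and the (µ, λ) pairs are assumed to be fixed rather than searched.

```python
from itertools import product

# Hyper-parameter grid as reported in the paper's experiment setup.
# The (mu, lambda) pairs are treated here as fixed values (an assumption
# based on how the paper lists them).
GRID = {
    "eta":         (0.3, 0.4, 0.5, 0.6, 0.7),
    "alpha":       (0.15, 0.05, 0.03),
    "J":           (1, 3, 5),
    "mu_lambda_l": ((0.3, 0.4),),
    "mu_lambda_u": ((0.7, 0.6),),
}

def grid_search(validate):
    """Exhaustively evaluate every grid configuration and return the one
    with the best validation score (higher is better).

    `validate(cfg)` is a placeholder for "train with cfg, return held-out
    validation accuracy" — not part of the paper's released code.
    """
    best_cfg, best_score = None, float("-inf")
    keys = list(GRID)
    for values in product(*(GRID[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = validate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy stand-in for validation that happens to prefer eta = 0.5,
# the best reported value for PhysioNet.
toy_validate = lambda cfg: 1.0 - abs(cfg["eta"] - 0.5)
best, _ = grid_search(toy_validate)
```

Test results would then be reported for `best` only, matching the paper's "best held-out validation performance" protocol; per-dataset winners (e.g. η = 0.6 for MIMIC-III) arise because `validate` differs per dataset.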