Interpolation-Prediction Networks for Irregularly Sampled Time Series
Authors: Satya Narayan Shukla, Benjamin Marlin
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We investigate the performance of this architecture on both classification and regression tasks, showing that our approach outperforms a range of baseline and recently proposed models. |
| Researcher Affiliation | Academia | Satya Narayan Shukla, College of Information and Computer Sciences, University of Massachusetts Amherst, snshukla@cs.umass.edu; Benjamin M. Marlin, College of Information and Computer Sciences, University of Massachusetts Amherst, marlin@cs.umass.edu |
| Pseudocode | No | The paper contains mathematical equations and architectural diagrams but no explicit 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Our implementation is available at: https://github.com/mlds-lab/interp-net |
| Open Datasets | Yes | We test the model framework on two publicly available real-world datasets: MIMIC-III, a multivariate time series dataset consisting of sparse and irregularly sampled physiological signals collected at Beth Israel Deaconess Medical Center from 2001 to 2012 (Johnson et al., 2016), and UWave Gesture, a univariate time series dataset consisting of simple gesture patterns divided into eight categories (Liu et al., 2009). |
| Dataset Splits | Yes | For MIMIC-III, we create our own dataset (appendix A.1) and report the results of a 5-fold cross-validation experiment... Out of the training data, 30% is used for validation. (A hedged sketch of this protocol follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'TensorFlow' but does not specify its version number or any other software dependencies with version numbers. |
| Experiment Setup | Yes | We use cross-entropy loss for classification and squared error for regression. We also include ℓ2 regularizers for both the interpolation and prediction networks' parameters. δI, δP, and δR are hyper-parameters that control the trade-off between the components of the objective function. (A hedged sketch of this objective follows the table.) |
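
The dataset-splits row describes a 5-fold cross-validation protocol in which 30% of each fold's training portion is held out for validation. The sketch below illustrates that protocol using scikit-learn utilities; the function and variable names are illustrative and are not taken from the authors' released code.

```python
# Hedged sketch of the 5-fold protocol from the "Dataset Splits" row:
# within each fold, 30% of the training portion is held out for
# validation. Names are illustrative, not from the paper's code.
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def five_fold_splits(n_examples, seed=0):
    """Yield (fold, train_idx, val_idx, test_idx) index arrays."""
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    for fold, (train_idx, test_idx) in enumerate(kf.split(np.arange(n_examples))):
        # Hold out 30% of this fold's training data for validation.
        tr_idx, val_idx = train_test_split(
            train_idx, test_size=0.30, random_state=seed)
        yield fold, tr_idx, val_idx, test_idx
```

Usage would follow the standard pattern: for each `(fold, tr, val, te)`, train on `X[tr]`, tune hyper-parameters on `X[val]`, and report performance on `X[te]`, averaged over the five folds.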
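
The experiment-setup row quotes a composite objective: a supervised loss (cross-entropy for classification, squared error for regression) plus components weighted by δI, δP, and δR, including ℓ2 regularizers on the interpolation- and prediction-network parameters. The TensorFlow sketch below is one plausible reading of that description; the mapping of δI to the interpolation term and of δP, δR to the two ℓ2 penalties is our assumption, not the authors' confirmed decomposition.

```python
# Hedged sketch of the composite objective quoted in the
# "Experiment Setup" row. The assignment of delta_I to the
# interpolation term and delta_P / delta_R to the L2 penalties
# is an assumption based on the quoted text.
import tensorflow as tf

def composite_loss(y_true, y_pred, interp_loss,
                   interp_params, pred_params,
                   delta_I=1.0, delta_P=1e-4, delta_R=1e-4,
                   task="classification"):
    if task == "classification":
        # Cross-entropy supervised loss (binary here, for illustration).
        sup = tf.reduce_mean(
            tf.keras.losses.binary_crossentropy(y_true, y_pred))
    else:
        # Squared-error supervised loss for regression.
        sup = tf.reduce_mean(tf.square(y_true - y_pred))
    # L2 regularizers on the two sub-networks' parameters.
    l2_pred = tf.add_n([tf.nn.l2_loss(w) for w in pred_params])
    l2_interp = tf.add_n([tf.nn.l2_loss(w) for w in interp_params])
    return sup + delta_I * interp_loss + delta_P * l2_pred + delta_R * l2_interp
```

The three δ hyper-parameters would then be tuned on the validation split, trading off supervised accuracy against the interpolation term and the strength of the two regularizers.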