Learning from Irregularly-Sampled Time Series: A Missing Data Perspective

Authors: Steven Cheng-Xian Li, Benjamin Marlin

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental
  "Experiments show that our models are able to achieve competitive or better classification results on irregularly-sampled multivariate time series compared to recent RNN models while offering significantly faster training times." "In this section, we first evaluate the models on the finite index set case described in Section 2. We assess our framework using image modeling and completion experiments with controlled missingness on standard image benchmarks."
Researcher Affiliation | Academia
  "University of Massachusetts Amherst."
Pseudocode | No
  The paper describes generative processes and model structures but does not include formal pseudocode blocks or algorithms labeled as such.
Open Source Code | Yes
  "Our implementation is available at https://github.com/steveli/partial-encoder-decoder."
Open Datasets | Yes
  "We follow the experimental setup of MisGAN to quantitatively evaluate the performance of P-VAE and P-BiGAN on the imputation task using two image benchmarks, MNIST (LeCun et al., 2010) and CelebA (Liu et al., 2015)." "In this section, we evaluate our framework on a healthcare multivariate time series dataset, MIMIC-III (Johnson et al., 2016), using the mortality prediction task."
Dataset Splits | Yes
  "We split the data into 64% for training, 16% for validation, and the remaining 20% for testing."
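The stated 64% / 16% / 20% split can be reproduced by shuffling example indices and slicing. A minimal NumPy sketch (the function name `split_indices` and the fixed seed are illustrative assumptions, not taken from the paper's code):

```python
import numpy as np

def split_indices(n, train_frac=0.64, val_frac=0.16, seed=0):
    """Shuffle indices 0..n-1 and split into train/validation/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    # Remaining indices (here 20%) form the test set.
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

For example, with 100 samples this yields disjoint sets of sizes 64, 16, and 20.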
Hardware Specification | No
  The paper provides per-epoch running times and model sizes in Table 1 and Table 2, but it does not specify the hardware (e.g., GPU/CPU models, memory) used for these experiments.
Software Dependencies | No
  The paper mentions models and techniques such as variational autoencoders, generative adversarial networks, and inverse autoregressive flows, but it does not specify software versions (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x).
Experiment Setup | Yes
  "For P-VAE, we use an encoder q_φ(z|x, t) constructed by z₀ ∼ N(μ_φ(m(x, t)), Σ_φ(m(x, t))), z = IAF_φ(z₀) (Eq. 9), using convolutional μ_φ and Σ_φ. Here we use two layers of inverse autoregressive flow (IAF) (Kingma et al., 2016). In addition, we use importance weighted autoencoders (IWAEs) with five importance weights. For the decoder, we use 128 evenly-spaced references over [0, 1] for the kernel smoother, whose values are the output of a standard CNN decoder. We use the Epanechnikov kernel with the kernel bandwidth set to 3/128. For the continuous convolutional layer in the encoder, we use 64 output channels with 98 evenly-spaced references. The piecewise-linear convolutional kernel has width 2/98 with 7 knots. For Cont P-VAE we use 8 importance weights for the IWAE objective."
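For intuition, the kernel-smoother decoder quoted above (an Epanechnikov kernel over 128 evenly-spaced references on [0, 1] with bandwidth 3/128) can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the paper's PyTorch implementation; in particular, the per-query weight normalization (Nadaraya-Watson style) is an assumption:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) for |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def kernel_smoother(ref_values, query_times, bandwidth=3 / 128):
    """Smooth values at evenly-spaced references onto arbitrary query times.

    ref_values: array of values at len(ref_values) evenly-spaced
    references over [0, 1] (e.g., the CNN decoder output at 128 points).
    query_times: 1-D array of observation times in [0, 1].
    """
    refs = np.linspace(0.0, 1.0, len(ref_values))
    u = (query_times[:, None] - refs[None, :]) / bandwidth
    w = epanechnikov(u)
    # Normalize weights per query so they sum to 1 (assumed normalization).
    w = w / np.clip(w.sum(axis=1, keepdims=True), 1e-12, None)
    return w @ ref_values
```

With bandwidth 3/128 and reference spacing 1/127, every query time in [0, 1] falls within the support of at least two reference kernels, so the smoother interpolates the decoder output at irregular observation times.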