SOM-VAE: Interpretable Discrete Representation Learning on Time Series
Authors: Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann, Gunnar Rätsch
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application on the eICU data set. |
| Researcher Affiliation | Academia | Vincent Fortuin, Matthias Hüser & Francesco Locatello, Department of Computer Science, ETH Zürich, Universitätsstrasse 6, 8092 Zürich, Switzerland, {fortuin, mhueser, locatelf}@inf.ethz.ch; Heiko Strathmann, Gatsby Unit, University College London, 25 Howland Street, London W1T 4JG, United Kingdom, heiko.strathmann@gmail.com; Gunnar Rätsch, Department of Computer Science, ETH Zürich, Universitätsstrasse 6, 8092 Zürich, Switzerland, raetsch@inf.ethz.ch |
| Pseudocode | Yes | Algorithm 1 Self-organizing map training (a minimal sketch of classical SOM training follows this table) |
| Open Source Code | Yes | Our code is available at https://github.com/ratschlab/SOM-VAE. |
| Open Datasets | Yes | We performed experiments on MNIST handwritten digits (LeCun et al., 1998), Fashion-MNIST images of clothing (Xiao et al., 2017), synthetic time series of linear interpolations of those images, time series from a chaotic dynamical system and real world medical data from the eICU Collaborative Research Database (Goldberger et al., 2000). |
| Dataset Splits | No | The paper mentions training and test sets for the eICU data (7000 unique patient stays for training, 3600 for testing) but does not explicitly detail training, validation, and test dataset splits for all experiments, nor does it specify exact percentages or sample counts for all datasets used. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions software like TensorFlow, Adam, sklearn, minisom, sacred, labwatch, and RoBO, but it does not specify exact version numbers for these software components. |
| Experiment Setup | No | The paper mentions that the hyperparameters (α, β, γ, τ) were optimized using Robust Bayesian Optimization, but it does not report their final numerical values, nor other setup details such as learning rate, batch size, or optimizer settings, in the main text or appendix (a generic hyperparameter-search sketch follows this table). |
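
The paper's Algorithm 1 describes self-organizing map training, the classical building block that SOM-VAE extends with gradient-based updates. The following is a minimal NumPy sketch of that classical SOM training loop, not the paper's SOM-VAE variant; the function name, grid size, and linear decay schedules are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid_h=4, grid_w=4, epochs=10, lr0=0.5, sigma0=1.0, seed=0):
    """Classical SOM training sketch: find the best-matching unit for each
    sample and pull it and its grid neighbors towards the sample."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # One prototype vector per node of a grid_h x grid_w map.
    weights = rng.normal(size=(grid_h, grid_w, dim))
    # Integer coordinates of every node, used for neighborhood distances.
    grid = np.stack(
        np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1
    )
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Linearly decay learning rate and neighborhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # Best-matching unit: node whose prototype is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood weighting on the map grid.
            grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))
            # Move all prototypes towards x, scaled by neighborhood weight.
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

# Example: fit a 4x4 map to random 2-D points.
prototypes = train_som(np.random.default_rng(1).normal(size=(500, 2)))
```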
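
On the experiment-setup point: the paper tunes the four loss weights (α, β, γ, τ) with Bayesian optimization via RoBO but reports no final values. Below is a hedged sketch of such a search using scikit-optimize's `gp_minimize` in place of RoBO, with a dummy quadratic objective standing in for an actual SOM-VAE training run; the search ranges, parameter names, and objective are all illustrative assumptions, not the paper's configuration.

```python
from skopt import gp_minimize
from skopt.space import Real

# Hypothetical stand-in for training SOM-VAE once with the given loss
# weights and returning a validation loss; the real objective is not
# specified in the paper's main text.
def train_and_evaluate(alpha, beta, gamma, tau):
    return (alpha - 1.0) ** 2 + (beta - 0.9) ** 2 + (gamma - 1.8) ** 2 + (tau - 1.4) ** 2

# Log-uniform search space for the four loss weights (ranges assumed).
space = [
    Real(1e-3, 1e2, prior="log-uniform", name="alpha"),
    Real(1e-3, 1e2, prior="log-uniform", name="beta"),
    Real(1e-3, 1e2, prior="log-uniform", name="gamma"),
    Real(1e-3, 1e2, prior="log-uniform", name="tau"),
]

# Gaussian-process Bayesian optimization over the space.
result = gp_minimize(lambda p: train_and_evaluate(*p), space, n_calls=25, random_state=0)
print("best (alpha, beta, gamma, tau):", result.x, "loss:", result.fun)
```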