Latent variable model for high-dimensional point process with structured missingness
Authors: Maksim Sinelnikov, Manuel Haussmann, Harri Lähdesmäki
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We demonstrate competitive performance using both simulated and real datasets." Section "4. Experiments" contains sub-sections for different datasets and performance tables. |
| Researcher Affiliation | Academia | 1Department of Computer Science, Aalto University, Espoo, Finland; 2Department of Mathematics and Computer Science, University of Southern Denmark, Odense, Denmark. |
| Pseudocode | No | The paper describes the model mathematically and textually but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | An implementation of our proposed methods is available at https://github.com/sinelnikovmaxim/MPP-VAE. |
| Open Datasets | Yes | "We used a modified version of the MNIST dataset (Le Cun et al., 1998) called Health MNIST (Krishnan et al., 2015)." and "We evaluated our model on healthcare data from the 2012 Physionet Challenge (Silva et al., 2012)." |
| Dataset Splits | Yes | We selected 2000 patients for training, 1917 for validation and performed future prediction on 100 patients, not included in training and validation sets. |
| Hardware Specification | No | The paper only mentions 'computational resources provided by the Aalto Science-IT project' in the acknowledgements, without specifying any particular hardware components like CPU or GPU models. |
| Software Dependencies | No | The paper mentions 'Adam optimiser (Kingma & Ba, 2015) as implemented in Pytorch (Paszke et al., 2019)' but does not provide specific version numbers for PyTorch or other libraries. |
| Experiment Setup | Yes | "We also selected sixty inducing points for each GP model for all setups and chose the latent dimension to be 32." "For both variants, we used the Adam optimiser (Kingma & Ba, 2015) as implemented in Pytorch (Paszke et al., 2019), with a learning rate equal to 0.001, which was selected based on cross-validation. After having pretrained a standard VAE, we trained both LLSM and LLPPSM on 1000 epochs, employing early stopping." Tables 9 and 10 provide detailed neural network architectures (a hedged training-loop sketch follows this table). |
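
For orientation, the configuration quoted in the Experiment Setup row (Adam in PyTorch with learning rate 0.001, latent dimension 32, 1000 epochs with early stopping) can be read as a training loop roughly like the one below. This is a minimal sketch, not the authors' implementation: the input size, hidden width, stand-in objective, and early-stopping patience are illustrative assumptions, while the actual LLSM/LLPPSM models, their ELBO, and the architectures in Tables 9 and 10 are in the linked MPP-VAE repository.

```python
import torch

# Hypothetical stand-in encoder; the real LLSM/LLPPSM architectures are
# described in Tables 9-10 of the paper and the MPP-VAE repository.
latent_dim = 32  # latent dimension quoted in the Experiment Setup row
model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),   # input/hidden sizes are assumptions
    torch.nn.ReLU(),
    torch.nn.Linear(256, latent_dim),
)

# Adam optimiser with learning rate 0.001, as quoted above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_epoch() -> float:
    """One placeholder training pass on a dummy mini-batch."""
    optimizer.zero_grad()
    x = torch.randn(16, 784)            # dummy data, not Health MNIST / Physionet
    loss = model(x).pow(2).mean()       # stand-in objective, not the paper's ELBO
    loss.backward()
    optimizer.step()
    return loss.item()

best_val, bad_epochs, patience = float("inf"), 0, 20  # patience is an assumption
for epoch in range(1000):                             # "trained ... on 1000 epochs"
    train_loss = train_epoch()
    val_loss = train_loss                             # substitute for a real validation pass
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break                                     # early stopping, as in the quote
```

Note that the quoted setup also pretrains a standard VAE before training LLSM and LLPPSM; that pretraining stage is not shown in the sketch above.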