Sparse Multi-Channel Variational Autoencoder for the Joint Analysis of Heterogeneous Data
Authors: Luigi Antelmi, Nicholas Ayache, Philippe Robert, Marco Lorenzi
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic data show that our model correctly identifies the prescribed latent dimensions and data relationships across multiple testing scenarios. When applied to imaging and clinical data, our method allows us to identify the joint effect of age and pathology in describing clinical condition in a large-scale clinical cohort. |
| Researcher Affiliation | Academia | 1University of Côte d'Azur, Inria, Epione Project-Team, France. 2University of Côte d'Azur, CoBTeK, France. 3Centre Mémoire, CHU of Nice, France. |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code developed in Pytorch (Paszke et al., 2017) is publicly available at https://gitlab.inria.fr/epione_ML/mcvae. |
| Open Datasets | Yes | Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (http://adni.loni.usc.edu). |
| Dataset Splits | Yes | We randomly assigned the subjects to a training and testing set through 10-fold cross validation. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used (e.g., GPU models, CPU models, or cloud computing instance types) for running its experiments. |
| Software Dependencies | No | The paper mentions 'Code developed in Pytorch (Paszke et al., 2017)' but does not specify a version number for PyTorch or any other relevant software dependency. |
| Experiment Setup | No | The paper describes the architecture ('multi-layer architectures were tested, ranging from 1 (linear) up to 4 layers for the encoding and decoding pathways, with a sigmoidal activation applied to all but last layer') and the optimizer ('minibatch stochastic gradient descent implemented with the backpropagation algorithm. With Adam (Kingma & Ba, 2014) we compute adaptive learning rates for the parameters'), but it lacks specific numerical hyperparameters such as learning rates, batch sizes, or number of epochs, which are crucial for reproducibility. (A hedged architecture sketch follows the table.) |
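
The paper states that subjects were randomly assigned to folds, but gives no seed or implementation details. A minimal sketch of such a split, assuming scikit-learn's `KFold` and a placeholder subject matrix `X` (both illustrative, not from the paper):

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical stand-in for the ADNI subject features; the actual
# preprocessing is not specified in the quoted text.
n_subjects = 500
X = np.random.randn(n_subjects, 10)

# Random subject assignment through 10-fold cross validation, matching
# the quoted procedure; random_state=0 is an assumption.
kfold = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(X)):
    X_train, X_test = X[train_idx], X[test_idx]
    # ... fit the multi-channel VAE on X_train, evaluate on X_test ...
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```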
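The quoted setup fixes the pathway depth (1 to 4 layers), the activation (sigmoid on all but the last layer), and the optimizer (Adam), but not the layer widths, latent size, learning rate, batch size, or epoch count. A hedged PyTorch sketch of one such encoding/decoding pathway, where every numeric value is an assumption rather than the authors' setting:

```python
import torch
import torch.nn as nn

def make_pathway(dims):
    """Stack linear layers with a sigmoid on all but the last layer,
    following the quoted description; dims are illustrative only."""
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:  # no activation after the last layer
            layers.append(nn.Sigmoid())
    return nn.Sequential(*layers)

# Assumed sizes: 100 input features, a 5-dimensional latent space.
encoder = make_pathway([100, 64, 32, 5])   # a 3-layer encoding pathway
decoder = make_pathway([5, 32, 64, 100])   # mirrored decoding pathway

# Adam with an assumed learning rate; the paper does not report one.
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
```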