BasisDeVAE: Interpretable Simultaneous Dimensionality Reduction and Feature-Level Clustering with Derivative-Based Variational Autoencoders

Authors: Dominic Danks, Christopher Yau

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the performance and scalability of the DeVAE and BasisDeVAE models on synthetic and real-world data and present how the derivative-based approach allows for expressive yet interpretable forward models which respect prior knowledge." (Section 5, Experiments)
Researcher Affiliation | Academia | "1 Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham, UK; 2 The Alan Turing Institute, London, UK; 3 Division of Informatics, Imaging & Data Sciences, University of Manchester, Manchester, UK; 4 Health Data Research UK, London, UK."
Pseudocode | No | The paper does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "We provide a GPU-aware PyTorch (Paszke et al., 2019) implementation of our approach at https://github.com/djdanks/BasisDeVAE and demonstrate its application to multiple settings in Section 5."
Open Datasets | Yes | "We use OASIS-3 (LaMontagne et al., 2019) which is the latest iteration of released data and contains entries from over 2,000 MRI sessions of patients at various stages of cognitive decline."; "... analysing a single-cell RNA sequencing (scRNA-seq) mouse spermatogenesis dataset (Ernst et al., 2019)"
Dataset Splits | No | "Each iteration consists of i) randomly partitioning the data into an 80%/20% train/test split, ii) training on the training data and iii) evaluating two metrics on the test data."; "We train on a randomly sampled 90% portion of the data and reserve the remaining 10% for test-set evaluation." No explicit validation set is mentioned for either experiment (see the split sketch after this table).
Hardware Specification | Yes | "All computations were performed on a Linux (Ubuntu) desktop with an Intel i7-4790K 4GHz CPU and NVIDIA GTX 980 GPU (4GB VRAM)."
Software Dependencies | No | The paper mentions using 'PyTorch' but does not provide a specific version number for it or any other key software dependencies.
Experiment Setup | Yes | "Each model is trained for 50 epochs using Adam (Kingma & Ba, 2015) with a 5 × 10⁻³ learning rate and employs a Gaussian conditional log-likelihood in the decoder."; "In both models we use β = 10, γ = 1, α = 0.1 and optimise using Adam with a 5 × 10⁻³ learning rate. We apply linear KL-annealing (Bowman et al., 2016b) to (β, γ) over the first 20% of 100 training epochs." (see the training-loop sketch below)
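
The split protocol quoted under Dataset Splits amounts to a simple random row partition with no validation set. Below is a minimal sketch of that protocol, assuming the data sit in a NumPy array `X`; `random_split`, `train_model` and `evaluate` are hypothetical placeholders for illustration, not the authors' code.

```python
# Minimal sketch of the reported train/test protocol (80%/20% for OASIS-3,
# 90%/10% for the scRNA-seq experiment). `X` is assumed to be an array of
# observations; the commented training/evaluation calls are hypothetical.
import numpy as np

def random_split(X, train_frac=0.8, rng=None):
    """Randomly partition the rows of X into train and test portions."""
    rng = rng if rng is not None else np.random.default_rng()
    idx = rng.permutation(len(X))
    n_train = int(train_frac * len(X))
    return X[idx[:n_train]], X[idx[n_train:]]

X = np.random.randn(1000, 50)                    # stand-in data for illustration
X_train, X_test = random_split(X, train_frac=0.8)
# model = train_model(X_train)                   # hypothetical training call
# metric_1, metric_2 = evaluate(model, X_test)   # evaluate both metrics on held-out data
```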
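For the optimisation settings quoted under Experiment Setup, the following sketch shows one way to wire them into a PyTorch training loop. The `model`, `train_loader` and `elbo_terms` names are hypothetical, and the exact way β, γ and α enter the objective is an assumption made for illustration; only the learning rate, weight values, epoch count and linear KL-annealing schedule come from the paper.

```python
# Sketch of the quoted hyperparameters: Adam at 5e-3, beta = 10, gamma = 1,
# alpha = 0.1, with linear KL-annealing of (beta, gamma) over the first 20%
# of 100 epochs. `model`, `train_loader` and `elbo_terms` are hypothetical.
import torch

n_epochs = 100
beta_max, gamma_max, alpha = 10.0, 1.0, 0.1
anneal_epochs = int(0.2 * n_epochs)              # anneal over the first 20 epochs

optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)

for epoch in range(n_epochs):
    # Linearly ramp the KL weights from 0 to their final values.
    scale = min(1.0, (epoch + 1) / anneal_epochs)
    beta, gamma = scale * beta_max, scale * gamma_max

    for x in train_loader:
        optimizer.zero_grad()
        # Hypothetical decomposition of the objective into a Gaussian
        # reconstruction log-likelihood, two KL terms and a regulariser.
        recon_ll, kl_latent, kl_cluster, reg = elbo_terms(model, x)
        loss = -recon_ll + beta * kl_latent + gamma * kl_cluster + alpha * reg
        loss.backward()
        optimizer.step()
```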