Modelling Cellular Perturbations with the Sparse Additive Mechanism Shift Variational Autoencoder
Authors: Michael Bereket, Theofanis Karaletsos
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate SAMS-VAE both quantitatively and qualitatively on a range of tasks using two popular single cell sequencing datasets. |
| Researcher Affiliation | Collaboration | Michael Bereket (insitro, mbereket@stanford.edu); Theofanis Karaletsos (insitro, theofanis@karaletsos.com). "Research supporting this publication conducted while authors were employed at insitro" |
| Pseudocode | Yes | Algorithm 1 SAMS-VAE generative process (see the generative-process sketch below the table) |
| Open Source Code | Yes | Code availability Our code, which includes implementations of all models and experiment configurations, is available at https://github.com/insitro/sams-vae. |
| Open Datasets | Yes | Dataset To assess model generalization to held out samples under individual perturbations, we analyze a subset of the genome-wide CRISPR interference (CRISPRi) perturb-seq dataset from Replogle et al. [17]... We analyze the CRISPR activation (CRISPRa) perturb-seq screen from Norman et al. [15]... |
| Dataset Splits | Yes | We randomly sample train, validation, and test splits. (See the splitting sketch below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or cloud instance types used for experiments. |
| Software Dependencies | No | The paper mentions `scikit-learn [16]` but does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | Each model is trained with a 100-dimensional latent space and MLP encoders and decoders with a single hidden layer of dimension 400 (see Section A.4 for full training details). Based on validation performance and sparsity, a Beta(1, 2) prior was selected for the SVAE+ mask, and a Bern(0.001) prior was selected for SAMS-VAE. (See the configuration sketch below the table.) |
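
The generative process quoted in the Pseudocode row (Algorithm 1) can be summarized in a short sketch: each perturbation draws a sparse latent mask and an embedding, and a cell's latent state is its basal state plus the masked embeddings of whichever perturbations it received. The following is a minimal illustration, not the authors' released implementation; the function name, the `decoder` argument, and the Gaussian observation model are assumptions, and the real model and inference code live in the linked repository.

```python
# Minimal sketch of the SAMS-VAE generative process (Algorithm 1).
# All names are illustrative; the Gaussian observation model is an
# assumption made here for simplicity.
import torch

def sample_sams_vae(D, decoder=None, latent_dim=100, mask_prob=0.001):
    """D: binary float tensor of shape (n_cells, n_perturbations)
    indicating which perturbations were applied to each cell."""
    n_cells, n_perts = D.shape
    # Global latent variables: a sparse mask and an embedding per perturbation.
    m = torch.bernoulli(torch.full((n_perts, latent_dim), mask_prob))  # m_t ~ Bern(0.001)
    e = torch.randn(n_perts, latent_dim)                               # e_t ~ N(0, I)
    # Local latent variables: one basal state per cell.
    z_basal = torch.randn(n_cells, latent_dim)                         # z_i ~ N(0, I)
    # Sparse additive mechanism shift: basal state plus the sum of
    # masked embeddings of the perturbations applied to each cell.
    z = z_basal + D @ (m * e)
    # Decode to observation space (identity decoder if none is supplied).
    mu = decoder(z) if decoder is not None else z
    return torch.normal(mu, torch.ones_like(mu))                       # x_i ~ N(mu_i, I)
```

For example, `sample_sams_vae(torch.eye(5))` draws five cells, each receiving a distinct single perturbation.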
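The Dataset Splits row states only that splits are sampled randomly. A minimal sketch using `scikit-learn` (which the paper cites) is shown below; the 80/10/10 fractions and placeholder arrays are assumptions, since the excerpt does not quote the actual proportions.

```python
# Minimal sketch of random train/validation/test splitting, assuming
# 80/10/10 fractions (not stated in the excerpt).
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 2000)        # placeholder cell-by-gene matrix
y = np.random.randint(0, 10, 1000)    # placeholder perturbation labels

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.2, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
```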
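The Experiment Setup row translates to a small configuration sketch. Only the 100-dimensional latent space, the single 400-unit hidden layer, and the Beta(1, 2) / Bern(0.001) priors come from the paper; the input dimension, the ReLU activation, and the use of `torch.distributions` objects are assumptions.

```python
# Minimal sketch of the reported architecture and mask priors. N_GENES
# and the ReLU activation are assumptions; the latent/hidden sizes and
# the two priors are taken from the paper.
import torch.nn as nn
import torch.distributions as dist

N_GENES, LATENT_DIM, HIDDEN_DIM = 2000, 100, 400  # N_GENES is illustrative

encoder = nn.Sequential(
    nn.Linear(N_GENES, HIDDEN_DIM), nn.ReLU(),
    nn.Linear(HIDDEN_DIM, 2 * LATENT_DIM),  # mean and log-variance, a common VAE convention
)
decoder = nn.Sequential(
    nn.Linear(LATENT_DIM, HIDDEN_DIM), nn.ReLU(),
    nn.Linear(HIDDEN_DIM, N_GENES),
)

svae_plus_mask_prior = dist.Beta(1.0, 2.0)   # selected for the SVAE+ mask
sams_vae_mask_prior = dist.Bernoulli(0.001)  # selected for SAMS-VAE
```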