Information Constraints on Auto-Encoding Variational Bayes

Authors: Romain Lopez, Jeffrey Regier, Michael I. Jordan, Nir Yosef

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that our method outperforms the state-of-the-art approach in this domain. Results are reported in Figure 2. ... We report our results in Table 1. ... We report our results in Table 2."
Researcher Affiliation | Academia | (1) Department of Electrical Engineering and Computer Sciences, University of California, Berkeley; (2) Department of Statistics, University of California, Berkeley; (3) Ragon Institute of MGH, MIT and Harvard; (4) Chan-Zuckerberg Biohub
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the methodology is openly available.
Open Datasets | Yes | "The extended Yale B dataset [27] contains cropped faces [28] of 38 people under 50 lighting conditions." and "We considered scRNA-seq data from peripheral blood mononuclear cells (PBMCs) from a healthy donor [34]."
Dataset Splits | No | The paper mentions using a "test set" and states that "λ is optimized via grid search", which implies a validation set, but it does not explicitly give the percentages or counts for the training/validation/test splits (see the sketch after this table).
Hardware Specification | No | The paper reports model training times but does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions various software tools and models by name (e.g., scVI, Cell Ranger, Seurat) but does not specify their version numbers or other software dependencies required for replication.
Experiment Setup | No | The paper states that the penalty parameter λ is "optimized via grid search" and specifies latent dimensions for some experiments, but it does not provide detailed hyperparameter values (e.g., learning rate, batch size, epochs) or other system-level training settings.
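
The Dataset Splits and Experiment Setup rows flag the same gap: the paper implies a validation set via the grid search over λ but never states the split sizes or the grid. The minimal Python sketch below illustrates what a fully reproducible version of that protocol would record. It is hypothetical, not the authors' code: the split fractions, dataset size, λ grid, and scoring stand-in are all illustrative assumptions.

```python
# Hypothetical sketch only: the paper says just that "λ is optimized via
# grid search", so every number and helper below is an assumption.
import numpy as np

rng = np.random.default_rng(0)

def split_indices(n, train_frac=0.7, val_frac=0.15):
    """Explicit train/validation/test split (fractions are assumed)."""
    perm = rng.permutation(n)
    n_train, n_val = int(train_frac * n), int(val_frac * n)
    return (perm[:n_train],
            perm[n_train:n_train + n_val],
            perm[n_train + n_val:])

def validation_score(lam, train_idx, val_idx):
    """Stand-in for fitting the λ-penalized model on the training split
    and scoring it on the validation split; a real run would train the
    VAE here and return, e.g., held-out ELBO."""
    return -abs(np.log10(lam))  # dummy score, peaks at lam = 1.0

n_samples = 10_000                     # assumed dataset size
train_idx, val_idx, test_idx = split_indices(n_samples)

lambda_grid = [0.1, 1.0, 10.0, 100.0]  # assumed search grid
best_lam = max(lambda_grid,
               key=lambda lam: validation_score(lam, train_idx, val_idx))
print(f"selected lambda = {best_lam}; "
      f"splits = {len(train_idx)}/{len(val_idx)}/{len(test_idx)}")
```

Reporting exactly these quantities (split counts, the grid searched, and the selection metric) is what the table's "No" entries indicate is missing from the paper.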