Decision-Making with Auto-Encoding Variational Bayes
Authors: Romain Lopez, Pierre Boyeau, Nir Yosef, Michael I. Jordan, Jeffrey Regier
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing. In this challenging instance of multiple hypothesis testing, our proposed approach surpasses the current state of the art. |
| Researcher Affiliation | Academia | Romain Lopez (1), Pierre Boyeau (1), Nir Yosef (1,2,3), Michael I. Jordan (1,4), and Jeffrey Regier (5). (1) Department of Electrical Engineering and Computer Sciences, University of California, Berkeley; (2) Chan-Zuckerberg Biohub, San Francisco; (3) Ragon Institute of MGH, MIT and Harvard; (4) Department of Statistics, University of California, Berkeley; (5) Department of Statistics, University of Michigan |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode blocks or algorithms. |
| Open Source Code | Yes | Our code is available at http://github.com/PierreBoyeau/decision-making-vaes |
| Open Datasets | Yes | We consider the MNIST dataset, which includes features x for each of the images of handwritten digits and a label c. [...] We split the MNIST dataset evenly between training and test datasets. For the labels 0 to 8, we use a total of 1,500 labeled examples. |
| Dataset Splits | No | The paper mentions splitting the MNIST dataset into training and test sets but does not specify the exact percentages or counts for these splits, nor does it explicitly detail a separate validation set split or how these splits can be reproduced. |
| Hardware Specification | Yes | In the pPCA experiment, training a single VAE takes 12 seconds, while step one and two of our method together take 53 seconds (on a machine with a single NVIDIA GeForce RTX 2070 GPU). |
| Software Dependencies | No | The paper mentions methodological components such as neural networks and the Gumbel-softmax trick, but it does not name any programming languages, libraries, or solvers with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | Unless stated otherwise, we use 30 particles per iteration for training the models (as in [19]), 10,000 samples for reporting log-likelihood proxies, and 200 particles for making decisions. All results are averaged across five random initializations of the neural network weights. [...] AIS is computationally intensive, so we used 500 steps and 100 samples from the prior to keep the runtime manageable. |
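
The even train/test split and the 1,500 labeled examples quoted in the Open Datasets row can be made concrete. Below is a minimal sketch, assuming uniform random selection with a fixed seed, since the paper does not document the exact selection procedure; the function name `make_semisupervised_split` and the seed are hypothetical.

```python
import numpy as np
from torchvision import datasets

# Hypothetical reconstruction of the split described in the paper: an even
# train/test split of MNIST and 1,500 labeled examples drawn from the digits
# 0-8. Uniform random sampling with a fixed seed is an assumption, not the
# authors' documented protocol.

def make_semisupervised_split(root="./data", n_labeled=1500, seed=0):
    rng = np.random.default_rng(seed)

    # Pool the official MNIST train and test images, then re-split evenly.
    train = datasets.MNIST(root, train=True, download=True)
    test = datasets.MNIST(root, train=False, download=True)
    images = np.concatenate([train.data.numpy(), test.data.numpy()])
    labels = np.concatenate([train.targets.numpy(), test.targets.numpy()])

    perm = rng.permutation(len(images))
    half = len(images) // 2
    train_idx, test_idx = perm[:half], perm[half:]

    # Labeled subset: only the digits 0-8 are eligible, per the paper.
    eligible = train_idx[labels[train_idx] != 9]
    labeled_idx = rng.choice(eligible, size=n_labeled, replace=False)

    return {
        "train": (images[train_idx], labels[train_idx]),
        "test": (images[test_idx], labels[test_idx]),
        "labeled": (images[labeled_idx], labels[labeled_idx]),
    }
```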
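The Gumbel-softmax trick noted in the Software Dependencies row has a standard implementation in PyTorch, and the following sketch shows what differentiable sampling of a discrete latent variable looks like in practice. The temperature value is an illustrative assumption; the paper does not report the temperature or the library version it used.

```python
import torch
import torch.nn.functional as F

# Minimal illustration of the Gumbel-softmax trick the paper mentions: it
# draws approximately one-hot samples from a categorical distribution while
# keeping the operation differentiable with respect to the logits.

logits = torch.randn(4, 10, requires_grad=True)  # batch of 4, 10 classes

# Soft, differentiable relaxation; the temperature tau controls sharpness.
soft_sample = F.gumbel_softmax(logits, tau=0.5, hard=False)

# Straight-through variant: one-hot in the forward pass,
# soft gradients in the backward pass.
hard_sample = F.gumbel_softmax(logits, tau=0.5, hard=True)

soft_sample.sum().backward()  # gradients flow back to the logits
print(logits.grad.shape)      # torch.Size([4, 10])
```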
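The "particles" in the quoted experiment setup are Monte Carlo samples from the variational posterior, used both in the training objective (30 per iteration) and in the reported log-likelihood proxies (10,000 samples). Below is a minimal sketch of a K-particle importance-weighted log-likelihood estimate of the kind such setups rely on, assuming a Gaussian variational posterior; the log-density callables (`log_p_xz`, `log_p_z`, `log_q_zx`) and the function name are hypothetical placeholders, not the authors' code.

```python
import math
import torch

# Sketch of a K-particle importance-weighted log-likelihood estimate:
# log (1/K) sum_k p(x, z_k) / q(z_k | x), with z_k ~ q(z | x).
# The encoder is assumed to return the mean and log-variance of a
# Gaussian variational posterior.

def iw_log_likelihood(x, encoder, log_p_xz, log_p_z, log_q_zx, K=30):
    mu, log_var = encoder(x)
    std = torch.exp(0.5 * log_var)

    # Draw K particles per data point via the reparameterization trick.
    eps = torch.randn(K, *mu.shape)
    z = mu + std * eps                      # shape: (K, batch, latent_dim)

    # Log importance weights: log p(x | z) + log p(z) - log q(z | x).
    log_w = log_p_xz(x, z) + log_p_z(z) - log_q_zx(z, mu, log_var)

    # Numerically stable log-mean-exp over the particle dimension.
    return torch.logsumexp(log_w, dim=0) - math.log(K)
```

With K = 1 this reduces to a single-sample ELBO-style estimate; larger K (such as the 10,000 samples quoted above for reporting) tightens the bound at proportional computational cost.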