Improving black-box optimization in VAE latent space using decoder uncertainty
Authors: Pascal Notin, José Miguel Hernández-Lobato, Yarin Gal
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate these advantages across several experimental settings in digit generation, arithmetic expression approximation and molecule generation for drug design. |
| Researcher Affiliation | Academia | Pascal Notin, Department of Computer Science, University of Oxford, Oxford, UK, pascal.notin@cs.ox.ac.uk; José Miguel Hernández-Lobato, Department of Engineering, University of Cambridge, Cambridge, UK, jmh233@cam.ac.uk; Yarin Gal, Department of Computer Science, University of Oxford, Oxford, UK, yarin@cs.ox.ac.uk |
| Pseudocode | Yes | Algorithm 1: Importance sampling estimator of MI; Algorithm 2: Bayesian Optimization with uncertainty censoring; Algorithm 3: Uncertainty-constrained gradient ascent (a hedged code sketch of this idea follows the table) |
| Open Source Code | Yes | We are open sourcing the code in the repository at the following address: https://github.com/pascalnotin/uncertainty_guided_optimization. |
| Open Datasets | Yes | We train a Character VAE (CVAE) [4] on 80,000 expressions generated by the formal grammar, then perform optimization in the latent space. ... For both architectures, we trained our models on a set of 250k drug-like molecules from the ZINC dataset [29]. |
| Dataset Splits | Yes | Detailed information for datasets and model hyperparameter values used across experiments are detailed in Appendix B-E (one section dedicated to each experimental setting) and G (reproducibility). ... We consider 4 distinct sets of points in latent space: embeddings into latent space of a random sample from the train and test sets... |
| Hardware Specification | Yes | Compute Resources: All experiments were run on NVIDIA V100 GPUs. |
| Software Dependencies | Yes | All models were implemented with Python 3.5.2 and PyTorch (version 1.7.1) [35]. |
| Experiment Setup | Yes | Detailed information for datasets and model hyperparameter values used across experiments are detailed in Appendix B-E (one section dedicated to each experimental setting) and G (reproducibility). |
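To make the pseudocode row above more concrete, the sketch below illustrates the general idea named by Algorithms 1 and 2: estimating decoder uncertainty at a latent point from stochastic decoder samples (here a BALD-style mutual-information decomposition with MC dropout, rather than the paper's importance sampling estimator) and censoring latent candidates whose uncertainty exceeds a threshold before the black-box objective is queried. This is a minimal, hedged approximation written against PyTorch (the paper reports PyTorch 1.7.1); the names `decoder`, `objective`, `propose_candidates`, `n_mc`, and `tau` are illustrative assumptions, not identifiers from the authors' repository at https://github.com/pascalnotin/uncertainty_guided_optimization.

```python
import torch
import torch.nn.functional as F


def decoder_uncertainty(decoder: torch.nn.Module, z: torch.Tensor, n_mc: int = 20) -> torch.Tensor:
    """BALD-style epistemic uncertainty of a categorical decoder at latent points z.

    Assumes `decoder(z)` returns logits of shape (batch, seq_len, vocab) and that
    dropout layers stay active (decoder.train()) so each forward pass draws a
    different MC-dropout sample of the decoder weights.
    """
    decoder.train()  # keep dropout on to sample decoder weights
    probs = []
    with torch.no_grad():
        for _ in range(n_mc):
            probs.append(F.softmax(decoder(z), dim=-1))
    probs = torch.stack(probs, dim=0)                              # (n_mc, batch, seq_len, vocab)

    mean_p = probs.mean(dim=0)                                     # predictive distribution
    h_pred = -(mean_p * torch.log(mean_p + 1e-12)).sum(-1)         # H[ E_w p(x|z,w) ]
    h_cond = -(probs * torch.log(probs + 1e-12)).sum(-1).mean(0)   # E_w H[ p(x|z,w) ]
    return (h_pred - h_cond).sum(dim=-1)                           # MI summed over positions, (batch,)


def censored_bo_step(decoder, objective, propose_candidates, tau: float, n_mc: int = 20):
    """One acquisition round: propose latent candidates, censor the uncertain ones,
    and evaluate the black-box objective only on the surviving points."""
    z_cand = propose_candidates()                    # e.g. acquisition-function maximizers
    mi = decoder_uncertainty(decoder, z_cand, n_mc)
    keep = mi <= tau                                 # censor high-uncertainty latents
    z_keep = z_cand[keep]
    return z_keep, objective(z_keep)
```

The censoring threshold `tau` is the key design choice: too small and the optimizer is confined near the training data, too large and it drifts into latent regions where the decoder produces unreliable outputs; the paper's Appendices B-G document how such hyperparameters were set in each experimental setting.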