The Thermodynamic Variational Objective
Authors: Vaden Masrani, Tuan Anh Le, Frank Wood
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We use the TVO to learn both discrete and continuous deep generative models and empirically demonstrate state-of-the-art model and inference network learning. |
| Researcher Affiliation | Academia | Vaden Masrani1, Tuan Anh Le2, Frank Wood1 1Department of Computer Science, University of British Columbia 2Department of Brain and Cognitive Sciences, MIT |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code to reproduce all experiments is available at: https://github.com/vmasrani/tvo. |
| Open Datasets | Yes | We use the binarized MNIST dataset with the standard train/validation/test split of 50k/10k/10k [35]. |
| Dataset Splits | Yes | We use the binarized MNIST dataset with the standard train/validation/test split of 50k/10k/10k [35]. |
| Hardware Specification | No | The paper acknowledges support from Compute Canada and Intel, but does not specify the exact hardware used for the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify version numbers for any software or libraries used in the implementation. |
| Experiment Setup | Yes | We train a sigmoid belief network, described in detail in Appendix I, using the TVO with the Adam optimizer. ... For each value of β1 we train the discrete generative model for K ∈ {2, 5, 10, ..., 50} and S ∈ {2, 5, 10, 50}. |
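
The Experiment Setup row sweeps the number of partition points K, the number of importance samples S, and the first nonzero partition point β1 of the TVO's log-uniform integration schedule. As a rough illustration of what these hyperparameters control, below is a minimal sketch of the left-Riemann-sum TVO bound computed with self-normalised importance sampling. This is not the authors' implementation (see the linked repository for that); the function name `tvo_log_evidence_bound` and the schedule construction are illustrative assumptions.

```python
import torch

def tvo_log_evidence_bound(log_weights, partition):
    """Left-Riemann-sum TVO lower bound on log p(x) -- illustrative sketch.

    log_weights: (S, B) tensor of log [p(x, z_s) / q(z_s | x)] for
                 S importance samples z_s ~ q(z | x) per batch element.
    partition:   1-D tensor of K+1 points 0 = beta_0 < ... < beta_K = 1.
    """
    # E_{pi_beta}[log w] estimated by self-normalised importance sampling:
    # under samples from q, the weight of sample s is proportional to w_s^beta.
    snis = torch.softmax(partition[:, None, None] * log_weights[None], dim=1)
    expectations = (snis * log_weights[None]).sum(dim=1)   # (K+1, B)
    widths = (partition[1:] - partition[:-1])[:, None]     # (K, 1)
    # Left Riemann sum over the thermodynamic integration curve;
    # with beta_0 = 0 the first term is the usual ELBO estimate.
    return (widths * expectations[:-1]).sum(dim=0)         # (B,)

# Hypothetical usage: K = 10 points spaced log-uniformly from beta_1 = 0.01
# to 1, with beta_0 = 0 prepended, matching the schedule swept in the paper.
betas = torch.cat([torch.zeros(1), torch.logspace(-2, 0, steps=10)])
log_w = torch.randn(5, 32)  # stand-in for log p(x, z) - log q(z | x); S=5, B=32
bound = tvo_log_evidence_bound(log_w, betas)  # one bound value per batch element
```

Note that the paper optimises this objective with a covariance-based gradient estimator for expectations under the tempered distributions; the snippet above only shows how the bound's value is assembled from the partition schedule and importance weights.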