Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
Authors: Lars Mescheder, Sebastian Nowozin, Andreas Geiger
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate that our model is able to learn rich posterior distributions and to generate compelling samples for complex data sets. |
| Researcher Affiliation | Collaboration | 1Autonomous Vision Group, MPI Tübingen 2Microsoft Research Cambridge 3Computer Vision and Geometry Group, ETH Zürich. Correspondence to: Lars Mescheder <lars.mescheder@tuebingen.mpg.de>. |
| Pseudocode | Yes | Algorithm 1 Adversarial Variational Bayes (AVB) |
| Open Source Code | No | The paper does not provide any links to source code or state that code is made available. |
| Open Datasets | Yes | We applied this to the Eight School example from Gelman et al. (2014). [...] In addition, we trained deep convolutional networks based on the DC-GAN architecture (Radford et al., 2015) on the binarized MNIST dataset (LeCun et al., 1998). An additional experiment on the CelebA dataset (Liu et al., 2015) can be found in the Supplementary Material. |
| Dataset Splits | No | The paper mentions a test set size for MNIST but does not provide complete training/validation/test splits, percentages, or refer to a standard split that includes all three for any dataset. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments. |
| Software Dependencies | No | The paper mentions tools like STAN and ITE-package, but does not provide specific version numbers for these or other general software dependencies (e.g., Python, deep learning frameworks) used in their experimental setup. |
| Experiment Setup | Yes | Both the encoder and decoder are parameterized by 2-layer fully connected neural networks with 512 hidden units each. [...] The adversary is parameterized by two neural networks with two 512-dimensional hidden layers each [...]. [...] For the decoder network, we use a 5-layer deep convolutional neural network. [...] For the adversary, we replace the fully connected neural network acting on z and x with a fully connected 4-layer neural network with 1024 units in each hidden layer. [...] For every posterior update step we performed two steps for the adversary. |
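The setup row notes that two adversary steps are performed for every posterior (encoder/decoder) update. A minimal sketch of that alternating schedule, with hypothetical `update_adversary` / `update_vae` callables standing in for the actual gradient steps of Algorithm 1 (which the paper specifies; the callables here are placeholders, not the paper's implementation):

```python
def avb_schedule(update_vae, update_adversary, num_iterations, adversary_steps=2):
    """Alternate adversary and VAE updates in the ratio described in the paper:
    `adversary_steps` adversary updates per encoder/decoder update."""
    for _ in range(num_iterations):
        for _ in range(adversary_steps):
            update_adversary()  # train T_psi to discriminate q(z|x) vs p(z) samples
        update_vae()            # update encoder/decoder using T_psi's ratio estimate

# Toy usage: count calls to confirm the 2:1 adversary-to-posterior schedule.
calls = {"adv": 0, "vae": 0}
avb_schedule(lambda: calls.__setitem__("vae", calls["vae"] + 1),
             lambda: calls.__setitem__("adv", calls["adv"] + 1),
             num_iterations=5)
```

This skeleton only captures the update schedule reported in the setup row; the loss functions, optimizers, and network architectures would come from the paper's Algorithm 1 and the layer sizes quoted above.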