A Geometric Perspective on Variational Autoencoders

Authors: Clément Chadebec, Stéphanie Allassonnière

NeurIPS 2022

Reproducibility Variable Result LLM Response
Research Type Experimental The proposed sampling method outperforms more advanced VAE models in terms of Fréchet Inception Distance [20] and Precision and Recall [45] scores on four benchmark datasets. We also discuss and show that it can benefit more recent VAEs as well. An implementation is available on GitHub. We show that the method appears robust to dataset size changes and outperforms its peers even more strongly when only smaller sample sizes are considered.
Researcher Affiliation Academia Clément Chadebec Université Paris Cité, INRIA, Inserm, SU Centre de Recherche des Cordeliers clement.chadebec@inria.fr Stéphanie Allassonnière Université Paris Cité, INRIA, Inserm, SU Centre de Recherche des Cordeliers stephanie.allassonniere@inria.fr
Pseudocode No The paper does not contain structured pseudocode or algorithm blocks. Procedures are described in narrative text and mathematical equations.
Open Source Code Yes An implementation is available on GitHub.
Open Datasets Yes Figure 2 shows a qualitative comparison between the resulting generated samples for MNIST [30] and CELEBA [31], see Appendix C for SVHN [37] and CIFAR 10 [27]. In addition, we also evaluate the model on a data augmentation task with neuroimaging data from OASIS [33].
Dataset Splits Yes For each experiment, the best retained model is again the one achieving the best ELBO on the validation set, whose size is set to 20% of the training set.
Hardware Specification Yes Generating 1k samples on CELEBA takes approx. 5.5 min for our method vs. 4 min for a 10-component GMM on a GPU V100-16GB. This work was granted access to the HPC resources of IDRIS under the allocation AD011013517 made by GENCI (Grand Equipement National de Calcul Intensif).
Software Dependencies No The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). It mentions using code and hyperparameters from authors if available but no specific versions.
Experiment Setup Yes In the following, all the models share the same auto-encoding neural network architectures and we used the code and hyper-parameters provided by the authors if available. See Appendix D for model descriptions and the comprehensive experimental setup.
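The report's headline metric is the Fréchet Inception Distance, which measures the distance between two Gaussians fitted to feature embeddings of real and generated samples. As a hedged sketch (not the paper's implementation; in practice the features come from an Inception network, which is assumed here to be supplied by the caller), the metric can be computed as:

```python
import numpy as np
from scipy import linalg


def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two feature sets.

    FID = ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2}),
    where (mu, S) are the mean and covariance of each feature set.
    Inputs are (n_samples, feature_dim) arrays, e.g. Inception activations.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)

    # Matrix square root of the covariance product; discard tiny
    # imaginary components introduced by numerical error.
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Identical feature sets give a distance of (numerically) zero; shifting one set's mean by a constant raises the score by the squared shift, which makes the metric easy to sanity-check.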
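The dataset-splits row reports that 20% of the training data is held out for validation-based model selection (best ELBO). A minimal sketch of such a split, assuming a simple random index partition (the paper does not specify the splitting procedure beyond the 20% ratio):

```python
import numpy as np


def train_val_split(n_train: int, val_frac: float = 0.2, seed: int = 0):
    """Randomly hold out val_frac of the training indices for validation.

    Returns (train_idx, val_idx); the paper reports val_frac = 0.2.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_train)
    n_val = int(round(val_frac * n_train))
    return idx[n_val:], idx[:n_val]
```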