Shape your Space: A Gaussian Mixture Regularization Approach to Deterministic Autoencoders

Authors: Amrutha Saseendran, Kathrin Skubch, Stefan Falkner, Margret Keuper

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We chose our experiments to evaluate the proposed approach in terms of sampling quality and expressiveness. In the first line of experiments, we compare the quality of newly generated and reconstructed samples from our model with those from a variety of other VAE variants. In the second line, we investigate our method's capability to model discrete and complex structured inputs such as arithmetic expressions and molecules.
Researcher Affiliation | Collaboration | 1 Bosch Center for Artificial Intelligence; 2 University of Siegen; Max Planck Institute for Informatics, Saarland Informatics Campus
Pseudocode | No | The paper does not include a figure, block, or section labeled 'Pseudocode', 'Algorithm', or 'Algorithm X', nor structured steps formatted like code.
Open Source Code | Yes | An implementation is available at https://github.com/boschresearch/GMM_DAE.
Open Datasets | Yes | We consider four datasets, MNIST [26], FASHIONMNIST [45], SVHN [36] and CELEBA [28], to evaluate the proposed method in image generation experiments. Given the ZINC250k dataset of drug molecules [20]
Dataset Splits | Yes | We provide all the experimental settings and hyperparameters used in the Appendix. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
Hardware Specification | Yes | All experiments were run on a GPU cluster, with a single GPU per individual experiment. Since the cluster is part of a carbon-neutral framework, these experiments did not contribute to climate change.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries (e.g., 'PyTorch 1.9', 'Python 3.8').
Experiment Setup | Yes | We provide all the experimental settings and hyperparameters used in the Appendix. For a fair comparison, we use the same architecture and experimental settings in all the considered baseline evaluations. Please refer to the Appendix for more details on the experimental settings. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]