Diffusion Variational Autoencoders

Authors: Luis A. Perez Rey, Vlado Menkovski, Jim Portegies

IJCAI 2020

Reproducibility assessment (variable, result, and supporting evidence from the paper):
Research Type: Experimental. Evidence: "We show that the VAE is indeed capable of capturing topological properties for datasets with a known underlying latent structure derived from generative processes such as rotations and translations." and, from Section 4 (Experiments), "We have implemented VAEs with latent spaces of d-dimensional spheres, a flat two-dimensional torus, a torus embedded in R^3, SO(3) and real projective spaces RP^d. For all our experiments we used multi-layer perceptrons for the encoder and decoder with three and two hidden layers respectively."
Researcher Affiliation: Academia. Evidence: "Luis A. Perez Rey, Vlado Menkovski and Jim Portegies, Eindhoven University of Technology, Eindhoven, The Netherlands"
Pseudocode: No. No explicit pseudocode or algorithm blocks were found.
Open Source Code: Yes. Repository: https://github.com/luis-armando-perez-rey/diffusion_vae
Open Datasets: Yes. Evidence: "Mainly as a first test of our algorithm, we trained VAEs on binarized MNIST [Salakhutdinov and Murray, 2008]."
Dataset Splits: No. The paper mentions training and test datasets but does not explicitly report training/validation/test splits or the percentages used.
Hardware Specification: No. No specific hardware details (e.g., GPU/CPU models, memory) were provided for the experiments.
Software Dependencies: No. The paper does not specify software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup: Yes. Evidence: "For all our experiments we used multi-layer perceptrons for the encoder and decoder with three and two hidden layers respectively.", "We have set S = 10 throughout the presented results." and "use an output layer with a tanh activation function to obtain 10^-7 ≤ t ≤ 10^-5."
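The setup quoted above combines an MLP encoder whose latent code is constrained to a manifold (e.g., a d-dimensional sphere) with a tanh output squashed into the diffusion-time range 10^-7 ≤ t ≤ 10^-5. A minimal NumPy sketch of both pieces follows. This is not the authors' implementation: the ReLU hidden activations, the layer shapes, and the log-linear mapping of the tanh output onto [10^-7, 10^-5] are assumptions made for illustration.

```python
import numpy as np

def scaled_diffusion_time(raw, t_min=1e-7, t_max=1e-5):
    """Map an unconstrained network output to a diffusion time t.

    tanh squashes `raw` into (-1, 1); a log-linear interpolation then
    maps that interval onto [t_min, t_max], matching the paper's stated
    range 1e-7 <= t <= 1e-5. (The exact mapping the authors use is not
    specified; log-linear interpolation is an assumption.)
    """
    u = (np.tanh(raw) + 1.0) / 2.0                      # in (0, 1)
    log_t = np.log(t_min) + u * (np.log(t_max) - np.log(t_min))
    return np.exp(log_t)

def encode_to_sphere(x, weights):
    """MLP encoder with three ReLU hidden layers whose linear output is
    projected onto the unit sphere, sketching a "latent space of
    d-dimensional spheres". `weights` is a list of (W, b) pairs; the
    last pair is the linear output layer.
    """
    h = x
    for W, b in weights[:-1]:
        h = np.maximum(0.0, h @ W + b)                  # ReLU hidden layers (assumed)
    W, b = weights[-1]
    z = h @ W + b
    return z / np.linalg.norm(z, axis=-1, keepdims=True)  # project onto the sphere
```

With this mapping, a raw output of 0 lands at the geometric mean of the bounds (10^-6), and extreme raw values saturate at the two endpoints, so the predicted diffusion time can never leave the stated range.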