Improving Relational Regularized Autoencoders with Spherical Sliced Fused Gromov Wasserstein

Authors: Khai Nguyen, Son Nguyen, Nhat Ho, Tung Pham, Hung Bui

ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we conduct extensive experiments to show that the new proposed autoencoders have favorable performance in learning latent manifold structure, image generation, and reconstruction. In this section, we conduct extensive experiments on MNIST (LeCun et al., 1998) and CelebA datasets (Liu et al., 2015) to evaluate the performance of s-DRAE, ps-DRAE and m(p)s-DRAE with various autoencoders
Researcher Affiliation | Collaboration | Khai Nguyen (VinAI Research, Vietnam, v.khainb@vinai.io); Son Nguyen (VinAI Research, Vietnam, v.son3@vinai.io); Nhat Ho (University of Texas at Austin and VinAI Research, Vietnam, minhnhat@utexas.edu); Tung Pham (VinAI Research, Vietnam, v.tungph4@vinai.io); Hung Bui (VinAI Research, Vietnam, v.hungbh1@vinai.io)
Pseudocode | Yes | To generate samples from vMF, we follow the procedure in (Ulrich, 1984), which is described in Algorithm 1 in Appendix B. (A sampling sketch is given after this table.)
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described (e.g., a specific repository link, explicit code release statement, or code in supplementary materials).
Open Datasets | Yes | In this section, we conduct extensive experiments on MNIST (LeCun et al., 1998) and CelebA datasets (Liu et al., 2015) to evaluate the performance of s-DRAE, ps-DRAE and m(p)s-DRAE with various autoencoders
Dataset Splits | Yes | For s-DRAE, ps-DRAE and m(p)s-DRAE (10 vMF components with uniform weights and same concentration parameters), we search for κ ∈ {1, 5, 10, 50, 100} which gives the best FID score on the validation set of the corresponding dataset. (A grid-search sketch is given after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions using "Adam optimizer" but does not specify version numbers for any software, libraries, or frameworks required to replicate the experiment.
Experiment Setup | Yes | To guarantee the fairness of the comparison, we use the same autoencoder architecture, Adam optimizer with learning rate = 0.001, β1 = 0.5 and β2 = 0.999; batch size = 100; latent size = 8 on MNIST and 64 on CelebA; coefficient λ = 1; fused parameter β = 0.1. We set the number of components K = 10 for autoencoder with a mixture of Gaussian distribution as the prior. More detailed descriptions of these settings are in Appendix F. (A configuration sketch is given after this table.)
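
The Pseudocode row above points to the vMF sampling routine of Ulrich (1984), reproduced as Algorithm 1 in the paper's Appendix B. Since the authors do not release code, the following is a minimal NumPy sketch of that rejection-sampling procedure, written from the standard Ulrich/Wood description rather than from the authors' implementation; the function name sample_vmf and its interface are our own.

    import numpy as np

    def sample_vmf(mu, kappa, n_samples, rng=None):
        """Draw samples from a von Mises-Fisher distribution on the unit sphere.

        Rejection-sampling sketch in the spirit of Ulrich (1984); mu is the mean
        direction (assumed unit norm, shape (d,)), kappa > 0 the concentration.
        """
        rng = np.random.default_rng() if rng is None else rng
        d = mu.shape[0]
        # Constants of the envelope distribution for the component along mu.
        b = (-2.0 * kappa + np.sqrt(4.0 * kappa ** 2 + (d - 1) ** 2)) / (d - 1)
        x0 = (1.0 - b) / (1.0 + b)
        c = kappa * x0 + (d - 1) * np.log(1.0 - x0 ** 2)

        samples = np.empty((n_samples, d))
        for i in range(n_samples):
            # Rejection step: sample w, the cosine of the angle to mu.
            while True:
                z = rng.beta((d - 1) / 2.0, (d - 1) / 2.0)
                u = rng.uniform()
                w = (1.0 - (1.0 + b) * z) / (1.0 - (1.0 - b) * z)
                if kappa * w + (d - 1) * np.log(1.0 - x0 * w) - c >= np.log(u):
                    break
            # Uniform direction in the tangent space orthogonal to the north pole e1.
            v = rng.normal(size=d - 1)
            v /= np.linalg.norm(v)
            samples[i] = np.concatenate(([w], np.sqrt(max(0.0, 1.0 - w * w)) * v))

        # Rotate the north pole e1 onto mu with a Householder reflection.
        e1 = np.zeros(d)
        e1[0] = 1.0
        u_vec = e1 - mu
        if np.linalg.norm(u_vec) < 1e-12:  # mu already is the north pole
            return samples
        u_vec /= np.linalg.norm(u_vec)
        return samples - 2.0 * np.outer(samples @ u_vec, u_vec)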
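
The Dataset Splits row describes how the concentration parameter κ is chosen: a grid search over {1, 5, 10, 50, 100} scored by FID on the validation split. A schematic of that selection loop is below; train_and_score is a hypothetical stand-in for training the autoencoder with a given κ and returning its validation FID, since that pipeline is not released.

    import random

    # Hypothetical stand-in: a real run would train s-DRAE/ps-DRAE with the given
    # vMF concentration and evaluate FID on the validation split instead of
    # returning a random number.
    def train_and_score(kappa):
        return random.random()

    kappa_grid = [1, 5, 10, 50, 100]                      # grid reported in the paper
    fid_by_kappa = {k: train_and_score(k) for k in kappa_grid}
    best_kappa = min(fid_by_kappa, key=fid_by_kappa.get)  # lower FID is better
    print(f"selected kappa = {best_kappa}")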
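
The Experiment Setup row lists the optimizer and training constants explicitly, which translate almost directly into code. Below is a minimal PyTorch sketch of those settings, assuming the MNIST configuration (latent size 8) and using a trivial placeholder network in place of the architecture detailed in the paper's Appendix F.

    import torch

    # Training constants as reported by the paper (MNIST configuration).
    latent_size = 8        # 64 on CelebA
    batch_size = 100
    lam = 1.0              # regularization coefficient λ
    beta_fused = 0.1       # fused parameter β

    # Placeholder encoder/decoder; the actual architecture is given in Appendix F.
    autoencoder = torch.nn.Sequential(
        torch.nn.Linear(28 * 28, latent_size),   # stand-in encoder
        torch.nn.Linear(latent_size, 28 * 28),   # stand-in decoder
    )

    # Adam with learning rate 0.001, β1 = 0.5, β2 = 0.999, as stated in the paper.
    optimizer = torch.optim.Adam(
        autoencoder.parameters(), lr=1e-3, betas=(0.5, 0.999)
    )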