When Is Unsupervised Disentanglement Possible?

Authors: Daniella Horan, Eitan Richardson, Yair Weiss

NeurIPS 2021

Reproducibility assessment (Variable / Result / LLM Response):

Research Type: Experimental
"We leverage recent advances in deep generative models to construct manifolds of highly realistic images for which the ground truth latent representation is known, and test whether modern and classical methods succeed in recovering the latent factors. For many different manifolds, we find that a spectral method that explicitly optimizes local isometry and non-Gaussianity consistently finds the correct latent factors, while baseline deep autoencoders do not."

Researcher Affiliation: Academia
Daniella Horan, Eitan Richardson, and Yair Weiss; School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel ({daniella.horan,eitan.richardson,yair.weiss}@mail.huji.ac.il)

Pseudocode: No
The paper describes its algorithms in text but does not contain structured pseudocode or algorithm blocks.

Open Source Code: No
The paper states that the baseline autoencoders were taken from the code of a recent paper [15], but does not provide concrete access to the authors' own source code for their proposed method.

Open Datasets: No
The paper describes creating its own synthetic manifolds using GANSpace [10] and StyleGAN2 [16], but does not provide concrete access information (a link, DOI, or formal citation for a dataset release) for these generated datasets.

Dataset Splits: No
The paper describes training models and evaluating performance, but does not provide specific dataset-split information (exact percentages, sample counts, or a detailed splitting methodology) for training, validation, or testing.

Hardware Specification: No
The paper does not provide specific hardware details (e.g., exact GPU/CPU models or memory amounts) used for running its experiments.

Software Dependencies: No
The paper mentions using tools such as GANSpace and StyleGAN2, as well as code from a prior paper [15], but does not provide specific software dependencies with version numbers.

Experiment Setup: Yes
"HLLE has a single free parameter k (the number of neighbors) and we find it by choosing the k for which the local isometry score is maximal. Results for all auto-encoders are for 100 iterations... The results in figure 7 show reconstructions with 1000 iterations for HAE and 100 iterations for the vanilla AE."
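The quoted setup selects HLLE's single free parameter k (the number of neighbors) by maximizing a local isometry score. The paper's exact score is not reproduced here, so the sketch below uses a hypothetical surrogate (the mean correlation between each point's distances to its neighbors in the input space and in the embedding) together with scikit-learn's Hessian LLE; the score definition, synthetic data, and candidate range for k are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding


def local_isometry_score(X, Z, k):
    """Hypothetical surrogate score: for each point, correlate its distances
    to its k nearest neighbors in the input X with the corresponding
    distances in the embedding Z, then average over points."""
    dX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    dZ = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    nbrs = np.argsort(dX, axis=1)[:, 1:k + 1]  # skip self (distance 0)
    cors = [np.corrcoef(dX[i, idx], dZ[i, idx])[0, 1]
            for i, idx in enumerate(nbrs)]
    return float(np.nanmean(cors))


def score_for_k(X, k):
    # Hessian LLE requires n_neighbors > d * (d + 3) / 2 (here d = 2, so k > 5).
    hlle = LocallyLinearEmbedding(n_neighbors=k, n_components=2,
                                  method="hessian", random_state=0)
    Z = hlle.fit_transform(X)
    return local_isometry_score(X, Z, k)


# Toy 2-D manifold embedded in 3-D as a stand-in for the paper's image manifolds.
X, _ = make_swiss_roll(n_samples=400, random_state=0)

candidate_ks = range(8, 25, 4)
best_k = max(candidate_ks, key=lambda k: score_for_k(X, k))
print("selected k:", best_k)
```

The selection loop simply re-fits the embedding for each candidate k and keeps the one with the highest score, which matches the stated procedure of choosing k to maximize local isometry.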