Commutative Lie Group VAE for Disentanglement Learning

Authors: Xinqi Zhu, Chang Xu, Dacheng Tao

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments by following the general unsupervised disentanglement learning setup, i.e. training models on a dataset without any supervision and evaluating the quality of disentanglement by metrics on synthetic datasets and by latent traversal inspection on real-world datasets.
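The latent-traversal inspection mentioned in the row above can be sketched generically: fix a latent code, sweep one dimension over a range, and decode each variant. The paper does not provide this code, so the sketch below is a stdlib-only illustration, and `decode` is a hypothetical stand-in for a trained VAE decoder.

```python
def latent_traversal(decode, z, dim, low=-2.0, high=2.0, steps=7):
    """Vary one latent dimension of code z over [low, high] and decode each variant."""
    outputs = []
    for k in range(steps):
        value = low + (high - low) * k / (steps - 1)
        z_varied = list(z)      # copy so the original code is untouched
        z_varied[dim] = value
        outputs.append(decode(z_varied))
    return outputs

# Identity "decoder" stand-in (a real VAE decoder would map z to an image).
images = latent_traversal(lambda z: z, z=[0.0, 0.0, 0.0], dim=1)
```

Inspecting such a traversal qualitatively checks whether a single latent dimension controls a single factor of variation, which is the point of the real-world evaluations in the paper.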
Researcher Affiliation | Collaboration | (1) School of Computer Science, Faculty of Engineering, The University of Sydney, Australia; (2) JD Explore Academy, JD.com, China.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/zhuxinqimac/CommutativeLieGroupVAE-Pytorch.
Open Datasets | Yes | We conduct experiments on the two most popular disentanglement datasets: DSprites (Matthey et al., 2017) and 3DShapes (Kim & Mnih, 2018). We run our Commutative Lie Group VAE on real-world datasets including CelebA (Liu et al., 2014), MNIST (LeCun et al., 1998), and 3DChairs (Aubry et al., 2014).
Dataset Splits | No | The paper mentions a 9/10 training and 1/10 test split but does not specify a separate validation split or cross-validation setup.
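The 9/10 training, 1/10 test split noted in the row above could be reproduced with a seeded shuffle. Only the split ratio comes from the report; the seed, shuffling scheme, and function name below are assumptions for illustration.

```python
import random

def train_test_split(indices, test_fraction=0.1, seed=0):
    """Shuffle indices deterministically and hold out the last fraction for testing."""
    shuffled = list(indices)
    random.Random(seed).shuffle(shuffled)   # fixed seed makes the split reproducible
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[:-n_test], shuffled[-n_test:]

# Example: 1000 samples -> 900 train, 100 test
train, test = train_test_split(range(1000))
```

Seeding the shuffle is what makes such a split reproducible across runs, which matters precisely because the paper does not document its own splitting procedure beyond the ratio.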
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types) used for running its experiments.
Software Dependencies | No | The paper mentions deep learning toolkits like TensorFlow (Abadi et al., 2015) and PyTorch (Paszke et al., 2019), but it does not specify the exact version numbers used for the experiments.
Experiment Setup | No | The paper states that implementation details are in Appendix 5, but the main text does not contain specific experimental setup details such as concrete hyperparameter values or training configurations.