On Disentangled Representations Learned from Correlated Data

Authors: Frederik Träuble, Elliot Creager, Niki Kilbertus, Francesco Locatello, Andrea Dittadi, Anirudh Goyal, Bernhard Schölkopf, Stefan Bauer

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To this end, we report a large-scale empirical study to systematically assess the effect of induced correlations between pairs of factors of variation in training data on the learned representations. We present the first large-scale empirical study (4260 models) that examines how modern disentanglement learners perform when ground truth factors of the observational data are correlated.
Researcher Affiliation | Collaboration | 1 Max Planck Institute for Intelligent Systems, Tübingen, Germany; 2 University of Toronto and Vector Institute; 3 Helmholtz AI, Munich; 4 Amazon (work partly done when FL was at ETH Zurich and MPI-IS); 5 Technical University of Denmark; 6 Mila and Université de Montréal; 7 CIFAR Azrieli Global Scholar.
Pseudocode | No | The paper discusses various algorithms and methods (e.g., VAEs, Ada-GVAE) but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code for reproducing experiments is available under https://github.com/ftraeuble/disentanglement_lib
Open Datasets | Yes | Most popular datasets in the disentanglement literature exhibit perfect independence in their FoV, such as dSprites (Higgins et al., 2017a), Cars3D (Reed et al., 2015), SmallNORB (LeCun et al., 2004), Shapes3D (Kim & Mnih, 2018) or MPI3D variants (Gondal et al., 2019).
Dataset Splits | No | The paper mentions 'training data' and 'test data' (referring to OOD data) but does not specify an explicit validation split or its proportion and methodology.
Hardware Specification | Yes | Each model was trained for 300,000 iterations on Tesla V100 GPUs.
Software Dependencies | No | The paper mentions various models and methods such as 'variational autoencoders (VAEs)', 'β-VAE', 'FactorVAE', 'AnnealedVAE', 'DIP-VAE-I', 'DIP-VAE-II' and 'β-TC-VAE', but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | Each model was trained for 300,000 iterations on Tesla V100 GPUs. ...each with 6 hyperparameter settings and 5 random seeds.
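
To give a sense of the scale implied by the Experiment Setup row, below is a minimal Python sketch of how such a grid (methods × 6 hyperparameter settings × 5 seeds, 300,000 steps each) could be enumerated. The method names follow the paper, but the hyperparameter values and the train_model helper are hypothetical placeholders, not the actual disentanglement_lib configuration.

```python
# Hypothetical sketch of the sweep described under "Experiment Setup":
# several VAE-based methods, each with 6 regularization-strength settings
# and 5 random seeds, trained for 300,000 iterations. The concrete values
# and the train_model() helper are illustrative placeholders, not the
# configuration used in the paper or in disentanglement_lib.
from itertools import product

METHODS = ["beta-VAE", "FactorVAE", "AnnealedVAE",
           "DIP-VAE-I", "DIP-VAE-II", "beta-TC-VAE"]
HYPERPARAM_SETTINGS = [1, 2, 4, 6, 8, 16]   # assumed 6-point grid per method
SEEDS = range(5)                            # 5 random seeds
TRAINING_STEPS = 300_000                    # as stated in the paper


def train_model(method, hparam, seed, steps):
    """Placeholder for a single training run (e.g., one job on a V100 GPU)."""
    print(f"{method}: hparam={hparam}, seed={seed}, steps={steps}")


if __name__ == "__main__":
    jobs = list(product(METHODS, HYPERPARAM_SETTINGS, SEEDS))
    # 6 methods * 6 settings * 5 seeds = 180 runs for one dataset configuration
    # in this sketch; the full study reports 4260 trained models overall.
    print(f"Total runs in this grid: {len(jobs)}")
    for method, hparam, seed in jobs:
        train_model(method, hparam, seed, TRAINING_STEPS)
```

Per the excerpt quoted under Research Type, the complete study spans 4260 trained models once the different datasets and induced-correlation settings are included.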