C-Disentanglement: Discovering Causally-Independent Generative Factors under an Inductive Bias of Confounder

Authors: Xiaoyu Liu, Jiaxin Yuan, Bang An, Yuancheng Xu, Yifan Yang, Furong Huang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on both synthetic and real-world datasets. Our method demonstrates competitive results compared to various SOTA baselines in obtaining causally disentangled features and downstream tasks under domain shifts.
Researcher Affiliation | Academia | 1 Department of Computer Science, 2 Department of Mathematics, University of Maryland, College Park {xliu1231, jyuan98, bangan, ycxu, yang7832, furongh}@umd.edu
Pseudocode | Yes | Algorithm 1: Train a VAE such that the latent representation is causally disentangled. Input: number of labels N_C, training data X with labels c, ratio of each category/confounder P(C = c) in the training set, dimension of latent space D
Open Source Code | Yes | code available here
Open Datasets | Yes | Datasets. We evaluate cdVAE on three datasets: the synthetic datasets 3DShapes [3] and Candle [21], and the real-world dataset CelebA [15].
Dataset Splits | No | The paper mentions "training images" and a "target domain" but specifies neither a validation set or split nor a cross-validation setup.
Hardware Specification | Yes | The experiments are conducted on 4 NVIDIA GeForce RTX 2080Ti.
Software Dependencies | No | The paper does not provide version numbers for its software dependencies (e.g., Python, PyTorch, or TensorFlow).
Experiment Setup | No | The paper describes the loss function and some general settings, such as repeating each experiment 5 times with different seeds, but it does not state specific hyperparameter values (learning rate, batch size, number of epochs, or optimizer settings) in the main text or appendix.
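The Algorithm 1 signature quoted in the Pseudocode row fixes the inputs a reimplementation would need, including the empirical confounder ratios P(C = c). A minimal sketch of how that ratio input could be assembled from raw training labels is shown below; the helper name `estimate_confounder_ratios` is hypothetical and not taken from the paper's released code.

```python
from collections import Counter

def estimate_confounder_ratios(labels):
    """Empirical P(C = c) for each confounder/category value c observed
    in the training labels, one of the inputs listed for Algorithm 1.
    (Hypothetical helper, not from the paper's implementation.)"""
    counts = Counter(labels)
    n = len(labels)
    return {c: counts[c] / n for c in counts}

# Toy training set with N_C = 2 category values.
labels = ["a", "a", "a", "b"]
ratios = estimate_confounder_ratios(labels)
# ratios == {"a": 0.75, "b": 0.25}
```

The ratios sum to 1 by construction and can then be passed, alongside the training data, labels, and latent dimension D, to whatever training loop implements the algorithm.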