Identifiable Latent Polynomial Causal Models through the Lens of Change

Authors: Yuhang Liu, Zhen Zhang, Dong Gong, Mingming Gong, Biwei Huang, Anton van den Hengel, Kun Zhang, Javen Qinfeng Shi

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results, obtained from both synthetic and real-world data, validate our theoretical contributions concerning identifiability and consistency.
Researcher Affiliation | Academia | 1 Australian Institute for Machine Learning, The University of Adelaide, Australia; 2 School of Computer Science and Engineering, The University of New South Wales, Australia; 3 School of Mathematics and Statistics, The University of Melbourne, Australia; 4 Halicioğlu Data Science Institute (HDSI), University of California San Diego, USA; 5 Department of Philosophy, Carnegie Mellon University, USA; 6 Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates
Pseudocode | No | Figure 8 depicts the proposed method to learn polynomial causal representations with non-Gaussian noise. Figure 9 depicts the proposed method to learn polynomial causal representations with Gaussian noise.
Open Source Code | No | The paper does not contain any explicit statement about releasing code or a link to a repository.
Open Datasets | Yes | Image Data: We further verify the proposed identifiability results and method on images from the chemistry dataset proposed in Ke et al. (2021)...
Dataset Splits | No | Synthetic Data: We first conduct experiments on synthetic data, generated by the following process: we divide latent noise variables into M segments, where each segment corresponds to one value of u as the segment label. Within each segment, the location and scale parameters are respectively sampled from uniform priors.
Hardware Specification | No | The paper does not provide any specific hardware details used for running the experiments.
Software Dependencies | No | Instead, we straightforwardly use the PyTorch (Paszke et al., 2017) implementation of the method of Jankowiak & Obermeyer (2018), which computes implicit reparameterization using a closed-form approximation of the probability density function derivative.
Experiment Setup | Yes | For experiments on the synthetic data and fMRI data, the encoder, decoder, MLP for λ, and MLP for prior are implemented using 3-layer fully connected networks with Leaky-ReLU activation functions. For optimization, we use the Adam optimizer with a learning rate of 0.001.
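The software-dependencies row above refers to implicit reparameterization (Jankowiak & Obermeyer, 2018), which PyTorch uses for distributions such as the Gamma that lack a simple location-scale transform. A minimal sketch of the idea, not the authors' actual code, is to draw differentiable samples with `rsample()` and backpropagate through them:

```python
import torch
from torch.distributions import Gamma

# The Gamma distribution has no simple location-scale reparameterization,
# so PyTorch's rsample() uses implicit reparameterization: pathwise
# gradients computed from a closed-form approximation of the derivative
# of the density/CDF (Jankowiak & Obermeyer, 2018).
concentration = torch.tensor(2.0, requires_grad=True)
rate = torch.tensor(1.0, requires_grad=True)
dist = Gamma(concentration, rate)

torch.manual_seed(0)
z = dist.rsample((1000,))   # differentiable samples of shape (1000,)
loss = z.mean()
loss.backward()             # gradients flow into concentration and rate

print(dist.has_rsample)               # True
print(concentration.grad is not None) # True
```

Because the samples are produced by `rsample()` rather than `sample()`, the distribution parameters receive gradients and can be trained end-to-end, as required for the variational objective described in the paper.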
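The experiment-setup row can be sketched in code. The 3-layer fully connected architecture with Leaky-ReLU and the Adam learning rate of 0.001 come from the paper; the input, hidden, and latent dimensions below are illustrative assumptions, since the paper does not report exact widths:

```python
import torch
import torch.nn as nn

# Sketch of the reported setup: 3-layer fully connected networks with
# Leaky-ReLU activations, optimized with Adam at learning rate 0.001.
# The dimensions (10, 64, 4) are assumptions for illustration only.
def make_mlp(in_dim: int, hidden_dim: int, out_dim: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.LeakyReLU(),
        nn.Linear(hidden_dim, hidden_dim),
        nn.LeakyReLU(),
        nn.Linear(hidden_dim, out_dim),
    )

encoder = make_mlp(in_dim=10, hidden_dim=64, out_dim=4)   # x -> z
decoder = make_mlp(in_dim=4, hidden_dim=64, out_dim=10)   # z -> x_hat
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=0.001)

x = torch.randn(32, 10)                 # dummy batch standing in for data
x_hat = decoder(encoder(x))
loss = nn.functional.mse_loss(x_hat, x) # placeholder reconstruction loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The paper additionally trains MLPs for λ and for the prior with the same 3-layer structure; they would be built with the same `make_mlp` helper, and the full objective is the paper's variational loss rather than the plain reconstruction loss used here.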