Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Differentiable Causal Discovery for Latent Hierarchical Causal Models
Authors: Parjanya Prashant, Ignavier Ng, Kun Zhang, Biwei Huang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct empirical studies to examine the efficacy of our differentiable causal discovery method. Specifically, we experiment with synthetic data in Section 6.1 and real image data in Section 6.2. |
| Researcher Affiliation | Academia | ¹University of California San Diego, ²Carnegie Mellon University, ³Mohamed bin Zayed University of Artificial Intelligence |
| Pseudocode | No | The paper describes the methodology in prose (Section 5) and does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Software for implementation: https://github.com/parjanya20/latent-causal-models |
| Open Datasets | Yes | In this section, we learn a latent causal graph for the MNIST dataset (LeCun et al., 2010). ... We evaluate them on the CMNIST dataset (Arjovsky et al., 2019) and CelebA dataset (Liu et al., 2015). |
| Dataset Splits | Yes | CMNIST details: For the colored MNIST dataset, we have around 12,000 training samples and 2,000 test samples. ... CelebA details: We use approximately 160,000 samples for training. For the test set, we evaluate the model exclusively on two groups: blonde males and non-blonde females. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify version numbers for any key software components or libraries. |
| Experiment Setup | Yes | The VAE encoder is a two-hidden-layer fully connected neural network with 64 and 32 hidden neurons, followed by ReLU activations. ... Our model is trained using the Adam optimizer with a learning rate of 1×10⁻³ for 400 epochs. We use a batch size of 32. ... The temperature for Gumbel softmax starts at 100 and exponentially decreases to 0.1 at 120 epochs and then stays constant. λ2 is 0.03 and λ3 is exponentially increased from 10⁻³ to 10 at 100 epochs and then stays constant. |
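The annealing schedules quoted in the Experiment Setup row (Gumbel-softmax temperature decaying exponentially from 100 to 0.1 by epoch 120, λ3 growing exponentially from 10⁻³ to 10 by epoch 100, both constant afterward) can be sketched as a single helper. This is a minimal illustration of the schedules as described in the report, not the authors' code; the function name `exp_schedule` is our own.

```python
def exp_schedule(epoch: int, start: float, end: float, end_epoch: int) -> float:
    """Exponentially interpolate from `start` to `end` over `end_epoch` epochs,
    holding the value constant thereafter (as described in the paper's setup)."""
    t = min(epoch, end_epoch) / end_epoch  # clamp progress to [0, 1]
    return start * (end / start) ** t

# Gumbel-softmax temperature: 100 -> 0.1 by epoch 120, then constant
tau_start = exp_schedule(0, 100.0, 0.1, 120)     # 100.0
tau_final = exp_schedule(300, 100.0, 0.1, 120)   # 0.1 (held constant)

# lambda_3: 1e-3 -> 10 by epoch 100, then constant
lam3_final = exp_schedule(100, 1e-3, 10.0, 100)  # 10.0
```

A geometric (exponential) interpolation is the natural reading of "exponentially decreases/increases" between two positive endpoints; a linear-in-log-space schedule like this is also what frameworks typically use for temperature annealing.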