Differentiable DAG Sampling
Authors: Bertrand Charpentier, Simon Kibler, Stephan Günnemann
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our extensive experiments, we compare VI-DP-DAG to other differentiable DAG learning baselines on synthetic and real datasets. |
| Researcher Affiliation | Academia | Bertrand Charpentier, Simon Kibler, Stephan Günnemann Department of Informatics & Munich Data Science Institute Technical University Munich {charpent, kibler, guennemann}@in.tum.de |
| Pseudocode | Yes | Figure 2: Differentiable DAG sampling in Python (an illustrative sketch follows this table) |
| Open Source Code | Yes | We provide all datasets and the model code at the project page: https://www.daml.in.tum.de/differentiable-dag-sampling |
| Open Datasets | Yes | We use the Sachs dataset which measures the expression level of different proteins and phospholipids in human cells (Sachs et al., 2005). We also use the pseudo-real SynTReN dataset sampled from a generator that was designed to create synthetic transcriptional regulatory networks and produces simulated gene expression data that approximates experimental data (Van den Bulcke et al., 2006). ... We provide all datasets and the model code at the project page. |
| Dataset Splits | Yes | We split all datasets in training/validation/test sets with 80%/10%/10%. |
| Hardware Specification | Yes | We evaluate the training time of all models on a single GPU (NVIDIA GTX 1080 Ti, 11 GB memory). |
| Software Dependencies | No | The paper mentions using optimizers like Adam (Kingma & Ba, 2015) and RMSProp (Tieleman & Hinton, 2012), but it does not specify software versions for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | In our experiments, VI-DP-DAG parametrizes the permutation probability Pψ(Π) with the Gumbel-Sinkhorn or Gumbel-Top-k trick, the edge probability Pφ(U) with the Gumbel-Softmax distribution, and the causal mechanisms fi,θ with a 3-layer Multi-Layer Perceptron (MLP). We use early stopping and perform a grid search over the permutation probability parametrization (i.e. Gumbel-Sinkhorn or Gumbel-Top-k), the fixed prior probability Pprior(Uij) ∈ [1e-2, 1e-1] and the regularization factor λ ∈ [0, 1e-1]. Finally, all temperature parameters are fixed to τ = 1 in all experiments. (Illustrative sketches of the sampling procedure and the hyperparameter search follow the table.) |
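The paper's Figure 2 gives Python pseudocode for differentiable DAG sampling, but only its caption is quoted above. As a rough illustration of the mechanism described in the Experiment Setup row, here is a minimal PyTorch sketch: a soft permutation is sampled with the Gumbel-Sinkhorn trick, edge indicators with a binary Gumbel-Softmax relaxation, and a fixed strictly-upper-triangular mask enforces acyclicity. The function names, the number of Sinkhorn iterations, and the exact composition Πᵀ(U ∘ M)Π are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of differentiable DAG sampling (not the paper's Figure 2).
import torch

def gumbel_noise(shape):
    # Standard Gumbel(0, 1) noise: -log(-log(U)), U ~ Uniform(0, 1).
    u = torch.rand(shape).clamp(1e-9, 1.0 - 1e-9)
    return -torch.log(-torch.log(u))

def gumbel_sinkhorn(log_alpha, tau=1.0, n_iters=20):
    # Relaxed permutation: perturb scores with Gumbel noise, then apply
    # Sinkhorn normalization (alternating row/column softmax in log space).
    log_p = (log_alpha + gumbel_noise(log_alpha.shape)) / tau
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # rows
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # columns
    return log_p.exp()  # approximately doubly stochastic

def sample_dag(perm_scores, edge_logits, tau=1.0):
    # perm_scores, edge_logits: (d, d) learnable parameters for d nodes.
    d = perm_scores.shape[0]
    pi = gumbel_sinkhorn(perm_scores, tau)  # soft permutation Π
    # Binary Gumbel-Softmax (concrete) relaxation of edge indicators U_ij.
    u = torch.sigmoid((edge_logits + gumbel_noise((d, d))
                       - gumbel_noise((d, d))) / tau)
    mask = torch.triu(torch.ones(d, d), diagonal=1)  # strictly upper triangular
    # Permute the masked edges into the sampled node ordering; the ordering
    # Π^T (U ∘ M) Π chosen here is one consistent convention, assumed for
    # illustration.
    return pi.T @ (u * mask) @ pi

adj = sample_dag(torch.randn(5, 5), torch.randn(5, 5))
print(adj.shape)  # (5, 5) differentiable adjacency matrix of a (relaxed) DAG
```

With a hard (discrete) permutation the returned matrix is exactly the adjacency matrix of a DAG, since conjugating a strictly triangular matrix by a permutation preserves acyclicity; with the soft relaxation it is approximately one, which is what makes the sampler differentiable.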
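Similarly, the reported hyperparameter search can be pictured as follows. This is a hypothetical reconstruction: the search ranges, the fixed temperature τ = 1, and the 80%/10%/10% split come from the quotes above, while the concrete grid points and the `train_and_validate` stub are assumptions for illustration.

```python
# Hypothetical sketch of the reported grid search over VI-DP-DAG settings.
import random
from itertools import product

def train_and_validate(perm_trick, p_prior, lam, tau=1.0):
    # Stub: stands in for fitting VI-DP-DAG with early stopping on the 80%
    # training split and scoring it on the 10% validation split.
    return random.random()

perm_tricks = ["gumbel-sinkhorn", "gumbel-top-k"]  # permutation parametrizations
p_priors = [1e-2, 1e-1]   # endpoints of the reported range Pprior(Uij) ∈ [1e-2, 1e-1]
lams = [0.0, 1e-1]        # endpoints of the reported range λ ∈ [0, 1e-1]

best_score, best_cfg = float("-inf"), None
for cfg in product(perm_tricks, p_priors, lams):
    score = train_and_validate(*cfg)  # temperature τ fixed to 1 throughout
    if score > best_score:
        best_score, best_cfg = score, cfg

print("best configuration:", best_cfg)
```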