DiBS: Differentiable Bayesian Structure Learning
Authors: Lars Lorch, Jonas Rothfuss, Bernhard Schölkopf, Andreas Krause
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In evaluations on simulated and real-world data, our method significantly outperforms related approaches to joint posterior inference. |
| Researcher Affiliation | Academia | Lars Lorch ETH Zurich Zurich, Switzerland lars.lorch@inf.ethz.ch Jonas Rothfuss ETH Zurich Zurich, Switzerland jonas.rothfuss@inf.ethz.ch Bernhard Schölkopf MPI for Intelligent Systems Tübingen, Germany bs@tuebingen.mpg.de Andreas Krause ETH Zurich Zurich, Switzerland krausea@ethz.ch |
| Pseudocode | Yes | Algorithm 1 DiBS with SVGD [19] for inference of p(G | D) |
| Open Source Code | Yes | Our Python JAX implementation of DiBS is available at: https://github.com/larslorch/dibs |
| Open Datasets | Yes | A widely used benchmark in structure learning is the proteomics data set by Sachs et al. [3]. The data contain N = 7,466 continuous measurements of d = 11 proteins involved in human immune system cells as well as an established causal network of their signaling interactions. |
| Dataset Splits | No | The paper mentions generating 'training, held-out, and interventional data sets' but does not specify the explicit splits (e.g., percentages or exact counts) for training, validation, or test sets. |
| Hardware Specification | No | The paper mentions 'CPUs' and 'GPU wall times' but does not provide specific hardware models or specifications (e.g., exact CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper states 'Our Python JAX implementation' but does not provide specific version numbers for Python, JAX, or any other software dependencies. |
| Experiment Setup | Yes | In the remainder, DiBS is always run for 3,000 iterations and with k = d for inference of d-variable BNs, which leaves the matrix of edge probabilities unconstrained in rank. For DiBS, we use a particle count of M = 30 and k = d. We train our neural networks using Adam [73] for 30,000 iterations with a learning rate of 10^-3 and batches of size 100. |
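The Pseudocode row above quotes Algorithm 1, which instantiates DiBS with SVGD. As a rough, self-contained illustration of a plain SVGD update in JAX (not the authors' DiBS implementation; the target density, kernel, and step size are placeholder assumptions), the sketch below transports M = 30 particles for 3,000 iterations, matching the particle count and iteration budget quoted in the Experiment Setup row:

```python
import jax
import jax.numpy as jnp

def log_prob(z):
    # Stand-in target: log-density of a standard 2-D Gaussian (up to a constant).
    # In DiBS the target would be the posterior over latent graph embeddings.
    return -0.5 * jnp.sum(z ** 2)

@jax.jit
def svgd_update(particles, step_size=1e-2):
    # SVGD update: phi(x_i) = (1/M) sum_j [ k(x_j, x_i) grad log p(x_j)
    #                                       + grad_{x_j} k(x_j, x_i) ]
    m = particles.shape[0]
    score = jax.vmap(jax.grad(log_prob))(particles)           # (M, d) score terms
    diffs = particles[:, None, :] - particles[None, :, :]     # x_i - x_j, (M, M, d)
    sq = jnp.sum(diffs ** 2, axis=-1)                         # squared distances (M, M)
    h = jnp.median(sq) / jnp.log(m + 1.0)                     # median bandwidth heuristic
    k = jnp.exp(-sq / h)                                      # RBF kernel matrix (M, M)
    attract = k @ score / m                                   # kernel-smoothed score term
    repulse = jnp.sum((2.0 / h) * diffs * k[..., None], axis=1) / m  # repulsive term
    return particles + step_size * (attract + repulse)

key = jax.random.PRNGKey(0)
particles = jax.random.normal(key, (30, 2))   # M = 30 particles, as quoted above
for _ in range(3_000):                        # 3,000 SVGD iterations, as quoted above
    particles = svgd_update(particles)
```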
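Similarly, the quoted neural-network training settings (Adam, 30,000 iterations, learning rate 10^-3, batch size 100) map onto a generic optax training loop. The network, synthetic data, and loss below are hypothetical stand-ins, not the paper's models:

```python
import jax
import jax.numpy as jnp
import optax

def predict(params, x):
    # Hypothetical one-hidden-layer network standing in for the paper's
    # nonlinear BN conditionals.
    w1, b1, w2, b2 = params
    return jnp.tanh(x @ w1 + b1) @ w2 + b2

def loss_fn(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (0.1 * jax.random.normal(k1, (11, 32)), jnp.zeros(32),
          0.1 * jax.random.normal(k2, (32, 1)), jnp.zeros(1))

optimizer = optax.adam(learning_rate=1e-3)    # learning rate 10^-3, as quoted
opt_state = optimizer.init(params)

@jax.jit
def train_step(params, opt_state, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    return optax.apply_updates(params, updates), opt_state

x_all = jax.random.normal(k3, (10_000, 11))   # placeholder data, d = 11 features
y_all = jnp.sum(x_all, axis=1, keepdims=True)
for step in range(30_000):                    # 30,000 Adam iterations, as quoted
    batch_key = jax.random.fold_in(key, step)
    idx = jax.random.randint(batch_key, (100,), 0, x_all.shape[0])  # batch size 100
    params, opt_state = train_step(params, opt_state, x_all[idx], y_all[idx])
```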