Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Deep Backtracking Counterfactuals for Causally Compliant Explanations

Authors: Klaus-Rudolf Kladny, Julius von Kügelgen, Bernhard Schölkopf, Michael Muehlebach

TMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We demonstrate these properties experimentally on a modified version of MNIST and CelebA.
Researcher Affiliation Academia Klaus-Rudolf Kladny EMAIL Max Planck Institute for Intelligent Systems, Tübingen, Germany Julius von Kügelgen EMAIL ETH Zurich, Switzerland Bernhard Schölkopf EMAIL Max Planck Institute for Intelligent Systems, Tübingen, Germany Michael Muehlebach EMAIL Max Planck Institute for Intelligent Systems, Tübingen, Germany
Pseudocode Yes Algorithm 1 mode_DeepBC; Algorithm 2 stochastic_DeepBC
Open Source Code Yes Our source code is available at https://github.com/rudolfwilliam/DeepBC. Detailed instructions for reproducing all experiments are provided in the README.md file at the top level of the repository.
Open Datasets Yes We use Morpho-MNIST, a modified version of MNIST proposed by Castro et al. (2019),... We also investigate generating counterfactual celebrity images on the CelebA dataset (Liu et al., 2015).
Dataset Splits Yes We train all models with the following parameters: optimizer: Adam; train/val. split ratio: 0.8; regularization: early stopping; max. # epochs: 1000.
Hardware Specification No No specific hardware details (e.g., GPU/CPU models, memory) are provided in the paper.
Software Dependencies No For all experiments, we use PyTorch (Paszke et al., 2019), PyTorch Lightning (Falcon, William and The PyTorch Lightning team, 2019) and normflows (Stimper et al., 2023).
Experiment Setup Yes We train all models with the following parameters: optimizer: Adam; train/val. split ratio: 0.8; regularization: early stopping; max. # epochs: 1000.
Morpho-MNIST. We use the same training parameters for both normalizing flow models. Patience refers to the number of epochs without further decrease in validation loss that early stopping waits before terminating. Flow: batch size (train) 64, batch size (val.) full, learning rate 10^-3, patience 2. VAE: batch size (train) 128, batch size (val.) 256, learning rate 10^-6, patience 10.
CelebA. We use the same training parameters for all normalizing flow models. Flow: batch size (train) 64, batch size (val.) 256, learning rate 10^-3, patience 2. VAE: batch size (train) 128, batch size (val.) 256, learning rate 10^-6, patience 50.
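The reported hyperparameters can be collected into a configuration sketch. This is a minimal illustration, not the authors' actual code: the names `COMMON`, `CONFIGS`, and `config_for` are hypothetical; only the values come from the tables quoted above.

```python
# Hypothetical configuration sketch for the training setup reported above.
# All identifiers are illustrative; the hyperparameter values are taken
# from the paper's tables as quoted in this report.

# Parameters shared across all models and datasets.
COMMON = {
    "optimizer": "Adam",
    "train_val_split": 0.8,
    "regularization": "early_stopping",
    "max_epochs": 1000,
}

# Per-model settings; "full" denotes evaluating the whole validation set
# in a single batch, as stated for the Morpho-MNIST flow model.
CONFIGS = {
    ("morpho_mnist", "flow"): {"batch_train": 64,  "batch_val": "full", "lr": 1e-3, "patience": 2},
    ("morpho_mnist", "vae"):  {"batch_train": 128, "batch_val": 256,    "lr": 1e-6, "patience": 10},
    ("celeba", "flow"):       {"batch_train": 64,  "batch_val": 256,    "lr": 1e-3, "patience": 2},
    ("celeba", "vae"):        {"batch_train": 128, "batch_val": 256,    "lr": 1e-6, "patience": 50},
}

def config_for(dataset: str, model: str) -> dict:
    """Merge the shared parameters with a model's specific settings."""
    return {**COMMON, **CONFIGS[(dataset, model)]}
```

For example, `config_for("celeba", "vae")` yields the Adam optimizer with learning rate 10^-6 and early-stopping patience 50, matching the CelebA VAE row.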