VACA: Designing Variational Graph Autoencoders for Causal Queries

Authors: Pablo Sánchez-Martin, Miriam Rateike, Isabel Valera

AAAI 2022, pp. 8159-8168 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "As a result, and as shown by our empirical results, VACA accurately approximates the interventional and counterfactual distributions on diverse SCMs. ... We show in extensive synthetic experiments that VACA outperforms competing methods (Karimi et al. 2020; Khemakhem et al. 2021) on complex datasets at estimating not only the mean of the interventional/counterfactual distribution (as in previous work), but also the overall distribution (measured in terms of Maximum Mean Discrepancy (Gretton et al. 2012))." (MMD is sketched in code below the table.)
Researcher Affiliation | Academia | "1 Max Planck Institute for Intelligent Systems, Tübingen, Germany 2 Department of Computer Science of Saarland University, Saarbrücken, Germany psanchez@tue.mpg.de, mrateike@tue.mpg.de, ivalera@cs.uni-saarland.de"
Pseudocode | No | The paper does not contain any explicit 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps formatted like code.
Open Source Code | Yes | "Moreover, our code is publicly available at GitHub" (https://github.com/psanch21/VACA)
Open Datasets | Yes | "Finally, we show a practical use-case in which VACA is used to assess counterfactual fairness of different classifiers trained on the real-world German Credit dataset (Dua and Graff 2019a), as well as to learn counterfactually fair classifiers without compromising performance." ... "and the Adult datasets (Dua and Graff 2019b)" (a dataset-loading sketch follows the table.)
Dataset Splits | No | The paper mentions that 'all model hyperparameters have been cross-validated using a similar computational budget' but does not provide specific details on the train, validation, or test dataset splits (e.g., percentages, sample counts, or references to predefined splits).
Hardware Specification | No | The paper does not provide specific hardware details (such as GPU/CPU models, memory specifications, or cloud instance types) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies or version numbers (e.g., Python 3.8, PyTorch 1.9, CUDA 11.1) needed to replicate the experimental environment.
Experiment Setup | Yes | "We compute all results over the same 10 random seeds and report mean and standard deviation. Refer to Appendix E for a complete description of the experimental setup." (The seed-averaging protocol is sketched below the table.)
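
The Research Type row cites Maximum Mean Discrepancy (Gretton et al. 2012) as the metric used to compare estimated and true interventional/counterfactual distributions. Below is a minimal NumPy sketch of the (biased) squared-MMD estimator with an RBF kernel; the function names and the fixed bandwidth are illustrative choices, not taken from the VACA codebase.

```python
import numpy as np

def rbf_kernel(x, y, bandwidth):
    """Gaussian RBF kernel matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 * bandwidth^2))."""
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2.0 * x @ y.T
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_squared(samples_p, samples_q, bandwidth=1.0):
    """Biased estimate of squared MMD between two sample sets (n x d arrays)."""
    k_pp = rbf_kernel(samples_p, samples_p, bandwidth)
    k_qq = rbf_kernel(samples_q, samples_q, bandwidth)
    k_pq = rbf_kernel(samples_p, samples_q, bandwidth)
    return k_pp.mean() + k_qq.mean() - 2.0 * k_pq.mean()

# Toy usage: compare model samples against ground-truth SCM samples (stand-in data).
rng = np.random.default_rng(0)
true_samples = rng.normal(size=(500, 3))
model_samples = rng.normal(size=(500, 3))
print(mmd_squared(true_samples, model_samples))
```

In practice the bandwidth is often set with the median heuristic over pairwise distances rather than fixed at 1.0.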
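
The Open Datasets row points to the UCI German Credit and Adult datasets (Dua and Graff 2019a,b). One hedged way to obtain them for a replication attempt is via their OpenML mirrors; the paper does not state how the data were fetched, so the loader below is an assumption rather than the authors' pipeline.

```python
from sklearn.datasets import fetch_openml

# Hypothetical loaders using OpenML mirrors of the UCI datasets cited in the paper.
german = fetch_openml(name="credit-g", version=1, as_frame=True)  # German Credit
adult = fetch_openml(name="adult", version=2, as_frame=True)      # Adult / Census Income

print(german.frame.shape)  # about 1,000 rows, 20 features plus the class label
print(adult.frame.shape)   # roughly 48,842 rows, 14 features plus the class label
```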
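
The Experiment Setup row states that all results are computed over the same 10 random seeds, with mean and standard deviation reported. The sketch below only illustrates that aggregation protocol; run_experiment is a hypothetical stand-in for a full VACA training and evaluation run.

```python
import numpy as np

def run_experiment(seed):
    """Hypothetical placeholder for one training/evaluation run returning a metric (e.g. MMD)."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.05, scale=0.01)  # dummy metric value

seeds = range(10)  # the paper reports results over the same 10 random seeds
scores = np.array([run_experiment(s) for s in seeds])
print(f"metric: {scores.mean():.4f} +/- {scores.std():.4f}")
```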