Algorithmic recourse under imperfect causal knowledge: a probabilistic approach

Authors: Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, Isabel Valera

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments (§7) on synthetic and semi-synthetic loan approval data show the need for probabilistic approaches to achieve algorithmic recourse in practice, as point estimates of the underlying true SCM often propose invalid recommendations or achieve recourse only at higher cost. Importantly, our results also show that subpopulation-based recourse is the right approach to adopt when assumptions such as additive noise do not hold."
Researcher Affiliation | Academia | "1 Max Planck Institute for Intelligent Systems, Tübingen, Germany; 2 Max Planck ETH Center for Learning Systems, Zürich, Switzerland; 3 Department of Engineering, University of Cambridge, United Kingdom; 4 Department of Computer Science, Saarland University, Saarbrücken, Germany"
Pseudocode | No | The paper describes the gradient-based procedure and other methods in textual form (Section 6) but does not include a formally labeled "Pseudocode" or "Algorithm" block. A hedged sketch of what such a gradient-based recourse search might look like is given after the table.
Open Source Code | Yes | "A user-friendly implementation of all methods that only requires specification of the causal graph and a training set is available at https://github.com/amirhk/recourse."
Open Datasets | Yes | "We also test our methods on a larger semi-synthetic SCM inspired by the German Credit UCI dataset [34]."
Dataset Splits | No | The paper refers to using "synthetic and semi-synthetic loan approval data" and the German Credit UCI dataset, and mentions a "training set" in relation to code availability, but it does not specify explicit percentages or counts for training, validation, or test splits.
Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using Gaussian processes (GPs), conditional variational autoencoders (CVAEs), and stochastic gradient descent, but it does not provide version numbers for any software libraries, frameworks, or dependencies. A sketch of how a GP might approximate one structural equation appears after the table.
Experiment Setup | Yes | "We show average performance ± 1 standard deviation for N_runs = 100, N_MC-samples = 100, and γ_LCB = 2.5." An illustration of how these hyperparameters enter the lower-confidence-bound validity check follows below.
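
On the Pseudocode row: the paper describes its gradient-based procedure only in prose, so the following is a minimal, hypothetical sketch of how a gradient-based recourse search with a lower-confidence-bound (LCB) constraint could look. It is not the authors' implementation; the additive counterfactual, the penalty weight `lam`, and the `classifier_samples` callable (which would return Monte Carlo samples of the classifier output under the uncertain SCM) are all illustrative assumptions.

```python
# Hypothetical gradient-based recourse search (not the authors' code).
# Minimise the action cost plus a penalty that pushes the lower confidence
# bound (LCB) of the classifier output above the 0.5 decision threshold.
import torch

def recourse_search(x_factual, classifier_samples, mask,
                    gamma_lcb=2.5, lam=10.0, steps=500, lr=0.05):
    """x_factual: factual feature vector (1-D tensor).
    classifier_samples: callable returning MC samples of h(x_cf).
    mask: 1.0 for actionable features, 0.0 for non-actionable ones."""
    action = torch.zeros_like(x_factual, requires_grad=True)
    opt = torch.optim.Adam([action], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Toy additive counterfactual; the paper instead propagates the
        # intervention through an (approximate) SCM.
        x_cf = x_factual + mask * action
        preds = classifier_samples(x_cf)           # MC samples of h(x_cf)
        lcb = preds.mean() - gamma_lcb * preds.std()
        cost = (mask * action).pow(2).sum()        # squared L2 action cost
        loss = cost + lam * torch.relu(0.5 - lcb)  # penalise LCB below 0.5
        loss.backward()
        opt.step()
    return (mask * action).detach()
```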
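On the Software Dependencies row: the paper's probabilistic recourse approximates structural equations with GPs. A minimal sketch of that idea, using scikit-learn as one possible (unconfirmed) dependency and a toy data-generating process, could look as follows.

```python
# Sketch: approximate one structural equation X_child := f(X_parent) + noise
# with a GP, then draw Monte Carlo samples of the child after an
# intervention on the parent. Data and variable names are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_parent = rng.normal(size=(200, 1))
X_child = np.sin(X_parent[:, 0]) + 0.1 * rng.normal(size=200)  # toy SCM

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_parent, X_child)

# Posterior over the child's value after setting the parent to 1.5:
mean, std = gp.predict(np.array([[1.5]]), return_std=True)
mc_samples = rng.normal(mean[0], std[0], size=100)  # 100 MC draws
```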
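Finally, the hyperparameters in the Experiment Setup row suggest the following validity check: a recourse action counts as valid if the LCB of the classifier output, estimated from N_MC-samples Monte Carlo draws of the counterfactual, exceeds the decision threshold. The toy sigmoid classifier and Gaussian counterfactual samples below are stand-ins, not the paper's setup.

```python
# Illustrative LCB validity check using the reported hyperparameters
# (N_MC-samples = 100, gamma_LCB = 2.5); classifier and samples are toys.
import numpy as np

def lcb_valid(h, x_cf_samples, gamma_lcb=2.5, threshold=0.5):
    """h maps an (n, d) array to probabilities; x_cf_samples holds
    N_MC Monte Carlo draws of the counterfactual feature vector."""
    preds = h(x_cf_samples)
    return preds.mean() - gamma_lcb * preds.std() > threshold

rng = np.random.default_rng(0)
x_cf_samples = rng.normal(loc=1.0, scale=0.2, size=(100, 2))  # N_MC = 100
h = lambda X: 1.0 / (1.0 + np.exp(-X.sum(axis=1)))            # toy sigmoid
print(lcb_valid(h, x_cf_samples))
```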