Bayesian Persuasion for Algorithmic Recourse

Authors: Keegan Harris, Valerie Chen, Joon Sik Kim, Ameet Talwalkar, Hoda Heidari, Zhiwei Steven Wu

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, our numerical simulations on semi-synthetic data empirically demonstrate the benefits of using persuasion in the algorithmic recourse setting.
Researcher Affiliation | Academia | Keegan Harris (Carnegie Mellon University, keeganh@cmu.edu); Valerie Chen (Carnegie Mellon University, valeriechen@cmu.edu); Joon Sik Kim (Carnegie Mellon University, joonkim@cmu.edu); Ameet Talwalkar (Carnegie Mellon University, talwalkar@cmu.edu); Hoda Heidari (Carnegie Mellon University, hheidari@cmu.edu); Zhiwei Steven Wu (Carnegie Mellon University, zstevenwu@cmu.edu)
Pseudocode | Yes | We adapt the sampling-based approximation algorithm of Dughmi and Xu [13] to our setting in order to compute an ε-optimal and ε-approximate signaling policy in polynomial time, as shown in Algorithm 1 in Appendix G.
Open Source Code | No | The paper does not provide a concrete link or explicit statement about the availability of the source code for the methodology described.
Open Datasets | Yes | In this section, we provide experimental results using a semi-synthetic setting where decision subjects are based on individuals in the Home Equity Line of Credit (HELOC) dataset [15]. The HELOC dataset contains information about 9,282 customers who received a Home Equity Line of Credit.
Dataset Splits | No | The paper mentions using the HELOC dataset and training a logistic regression model, but does not specify any training, validation, or test dataset splits (e.g., percentages, sample counts, or predefined splits).
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments (e.g., Python, PyTorch, TensorFlow, or specific solvers).
Experiment Setup | Yes | In order to adapt the HELOC dataset to our strategic setting, we select four features and define five hypothetical actions A = {a0, a1, a2, a3, a4} that decision subjects may take in order to improve their observable features. Actions {a1, a2, a3, a4} result in changes to each of the decision subject's four observable features, whereas action a0 does not. For simplicity, we view actions {a1, a2, a3, a4} as equally desirable to the decision maker, and assume they are all more desirable than a0. Using these four features, we train a logistic regression model that predicts whether an individual is likely to pay back a loan if given one, which will serve as the decision maker's realized assessment rule. For more information on how we constructed our experiments, see Appendix I.
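To make the Pseudocode item above more concrete, here is a minimal sketch of a sampling-based persuasion computation in the spirit of Dughmi and Xu [13]: it draws states from the prior and solves the standard obedience-constrained linear program over direct (action-recommendation) signals on the empirical distribution. The utility tables, sample size, and solver choice are illustrative assumptions, not a reproduction of the paper's Algorithm 1.

```python
# Hedged sketch of a sampling-based Bayesian-persuasion LP (direct signals),
# in the spirit of Dughmi and Xu [13]. All inputs are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

def approx_signaling_policy(sample_states, sender_u, receiver_u, n_actions):
    """Solve the persuasion LP on an empirical (sampled) prior.

    sample_states : int array of K states sampled from the prior
    sender_u[s, a], receiver_u[s, a] : utilities for state s and receiver action a
    Returns x[k, a] = P(recommend action a | sampled state k).
    """
    K = len(sample_states)
    mu = np.full(K, 1.0 / K)                      # empirical prior over samples
    n_vars = K * n_actions                        # decision variables x[k, a]

    # Objective: maximize expected sender utility  <=>  minimize its negation.
    c = -(mu[:, None] * sender_u[sample_states]).ravel()

    # Obedience (persuasiveness) constraints: following recommendation a must be
    # a best response, i.e. for all a, a':
    #   sum_k mu[k] * x[k, a] * (receiver_u[k, a] - receiver_u[k, a']) >= 0
    A_ub, b_ub = [], []
    for a in range(n_actions):
        for a_alt in range(n_actions):
            if a_alt == a:
                continue
            row = np.zeros(n_vars)
            for k, s in enumerate(sample_states):
                row[k * n_actions + a] = -mu[k] * (receiver_u[s, a] - receiver_u[s, a_alt])
            A_ub.append(row)
            b_ub.append(0.0)

    # Each sampled state's recommendation probabilities sum to one.
    A_eq = np.zeros((K, n_vars))
    for k in range(K):
        A_eq[k, k * n_actions:(k + 1) * n_actions] = 1.0
    b_eq = np.ones(K)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
    return res.x.reshape(K, n_actions)
```

Larger sample sizes tighten the gap between the empirical LP and the policy that is optimal under the true prior, which is the sense in which such sampling schemes trade accuracy for polynomial running time.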
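For the Open Datasets and Experiment Setup items above, the following sketch shows one plausible way to load the FICO HELOC data and fit the logistic-regression assessment rule on a four-feature subset. The file path, feature names, and label encoding are assumptions made for illustration; the paper's Appendix I specifies its actual feature selection and action construction.

```python
# Hedged sketch: loading the FICO HELOC dataset and fitting a logistic-regression
# assessment rule on four features. Path and feature names are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

heloc = pd.read_csv("heloc_dataset_v1.csv")           # hypothetical local path
features = [                                          # assumed four-feature subset
    "ExternalRiskEstimate",
    "MSinceMostRecentInqexcl7days",
    "AverageMInFile",
    "NumSatisfactoryTrades",
]
X = heloc[features]
y = (heloc["RiskPerformance"] == "Good").astype(int)  # 1 = repaid as agreed

# The fitted model plays the role of the decision maker's realized assessment rule.
assessment_rule = LogisticRegression(max_iter=1000).fit(X, y)
print(dict(zip(features, assessment_rule.coef_[0])))  # per-feature weights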