Prediction without Preclusion: Recourse Verification with Reachable Sets
Authors: Avni Kothari, Bogdan Kulynych, Tsui-Wei Weng, Berk Ustun
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct a comprehensive empirical study on the infeasibility of recourse on datasets from consumer finance. |
| Researcher Affiliation | Academia | Avni Kothari UCSF Bogdan Kulynych EPFL Tsui-Wei Weng UCSD Berk Ustun UCSD |
| Pseudocode | Yes | Algorithm 1 (GetReachableSet) — Require: x ∈ X, a feature vector; Require: A(x), the action set for x. Initialize R ← {x}, A ← A(x). 1: while FindAction(x, A) is feasible do 2: a ← FindAction(x, A) 3: R ← R ∪ {x + a} 4: A ← A \ {a}. Output: R = R_A(x). A runnable sketch appears after this table. |
| Open Source Code | No | The paper states: 'We develop a Python package for recourse verification with reachable sets. Our package includes an API for practitioners to easily specify complex actionability constraints, and routines to test the actionability of recourse actions and counterfactual explanations.' However, it provides no link to the source code and does not state that the code is publicly released or open-source. An illustrative verification check is sketched after this table. |
| Open Datasets | Yes | We work with three classification datasets from consumer finance, where models that assign fixed predictions would preclude credit access (see Table 2). We process each dataset by encoding categorical attributes and discretizing continuous features. We use the processed dataset to fit a classification model using one of the following model classes: logistic regression (LR), XGBoost (XGB), and random forests (RF). We train each model using an 80%/20% train/test split and tune hyperparameters using standard k-CV. We report the performance of each model in Appendix C. [14] FICO. Explainable Machine Learning Challenge, 2018. URL https://community.fico.com/s/explainable-machine-learning-challenge. [12] Dua, Dheeru and Casey Graff. UCI Machine Learning Repository, 2017. URL http://archive.ics.uci.edu/ml. [26] Kaggle. Give Me Some Credit, 2011. URL http://www.kaggle.com/c/GiveMeSomeCredit/. |
| Dataset Splits | No | The paper states: 'We train each model using an 80%/20% train/test split and tune hyperparameters using standard k-CV.' While k-CV implies validation folds, the paper does not specify a distinct validation split percentage, a sample count, or a specific named validation set. An illustrative reconstruction of this protocol is sketched after this table. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running its experiments, such as GPU or CPU models, or cloud computing specifications. |
| Software Dependencies | No | The paper mentions using 'logistic regression (LR), XGBoost (XGB), and random forests (RF)' model classes and developing a 'Python package', but it does not specify any software dependencies with version numbers (e.g., specific library versions for PyTorch, scikit-learn, or XGBoost). |
| Experiment Setup | No | The paper mentions 'tune hyperparameters using standard k-CV' but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or other detailed training configurations needed for reproduction. |
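
The Pseudocode row above reconstructs Algorithm 1. For readers who prefer executable form, here is a minimal Python sketch of the same loop, assuming a hypothetical `find_action(x, candidates)` oracle that returns a feasible action or `None`; in the paper this role is played by the authors' actionability-constrained search, which is not reproduced here.

```python
# Minimal sketch of Algorithm 1 (GetReachableSet). `find_action` is a
# hypothetical stand-in for the paper's feasible-action search.
from typing import Callable, Optional

import numpy as np


def get_reachable_set(
    x: np.ndarray,
    actions: set[tuple],
    find_action: Callable[[np.ndarray, set[tuple]], Optional[tuple]],
) -> set[tuple]:
    """Enumerate the points reachable from x via feasible actions."""
    reachable = {tuple(x)}        # R <- {x}
    candidates = set(actions)     # A <- A(x)
    while True:
        a = find_action(x, candidates)  # FindAction(x, A)
        if a is None:                   # no feasible action remains
            break
        reachable.add(tuple(x + np.asarray(a)))  # R <- R ∪ {x + a}
        candidates.discard(a)                    # A <- A \ {a}
    return reachable                             # R = R_A(x)
```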
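The package described in the Open Source Code row is not linked, so the following is only an assumed-interface sketch of recourse verification given a reachable set: a prediction is certifiably fixed when no reachable point receives the favorable label. `has_recourse` is a hypothetical helper, not the authors' API, and `clf` stands for any fitted scikit-learn-style classifier.

```python
import numpy as np


def has_recourse(clf, reachable_set: set[tuple], target: int = 1) -> bool:
    """Return True iff some reachable point receives the favorable label.

    If this returns False, the model assigns a fixed (recourse-free)
    prediction over the whole reachable set R_A(x).
    """
    points = np.array(sorted(reachable_set))  # stack reachable points row-wise
    return bool(np.any(clf.predict(points) == target))
```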
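The evaluation protocol quoted in the Dataset Splits row (80%/20% train/test split, hyperparameters tuned with standard k-CV) can be reconstructed as follows. This is an illustrative sketch only: the dataset is synthetic, the parameter grid is invented, and k=5 is an assumption, since the paper reports none of these values.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder data; the paper uses processed consumer-finance datasets.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 80%/20% train/test split, as reported in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0
)

# "Standard k-CV" hyperparameter tuning; k and the grid are assumptions.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 500], "max_depth": [4, 8, None]},
    cv=5,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```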