Learning Models for Actionable Recourse
Authors: Alexis Ross, Himabindu Lakkaraju, Osbert Bastani
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the efficacy of our approach with extensive experiments on real data. |
| Researcher Affiliation | Collaboration | Alexis Ross (Harvard University; Allen Institute for Artificial Intelligence, alexisr@allenai.org); Himabindu Lakkaraju (Harvard University, hlakkaraju@seas.harvard.edu); Osbert Bastani (University of Pennsylvania, obastani@seas.upenn.edu) |
| Pseudocode | No | The paper describes the algorithm steps using text and mathematical equations but does not include a distinct pseudocode or algorithm block. |
| Open Source Code | Yes | Our code is available at https://github.com/alexisjihyeross/adversarial_recourse. |
| Open Datasets | Yes | The first contains adult income information from the 1994 United States Census Bureau [Dua and Graff, 2017]... The second contains information collected by ProPublica about criminal defendants' COMPAS recidivism scores [Angwin et al., 2016]... The third dataset represents bail outcomes from two different U.S. state courts from 1990-2009 [Schmidt and Witte, 1988]... The fourth dataset is the German Credit dataset [Dua and Graff, 2017] |
| Dataset Splits | Yes | We randomly split each dataset into 80% train and 20% validation sets. |
| Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., CPU, GPU models, or memory). |
| Software Dependencies | No | The paper mentions software like 'alibi implementation' and 'LIME' but does not provide specific version numbers for these or other dependencies. |
| Experiment Setup | Yes | All models are neural networks with 3 100-node hidden layers, dropout probability 0.3, and tanh activations. For evaluation, we choose the epoch achieving the highest validation F1 score. We experimented with λ values from 0.0 to 2.0 in increments of 0.2... we set δmax = 0.75 after standardizing features. |
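
The reported architecture (three 100-node hidden layers, tanh activations, dropout 0.3) can be sketched as a minimal forward pass. This is an illustrative reconstruction, not the authors' code: the input dimension, output head (a single sigmoid unit for binary classification), and weight initialization are assumptions, since the paper's table entry does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(in_dim, hidden=100, n_hidden=3, out_dim=1):
    """Initialize an MLP matching the reported setup: 3 hidden layers of 100 nodes.
    in_dim and out_dim are assumed; the paper does not state them here."""
    dims = [in_dim] + [hidden] * n_hidden + [out_dim]
    return [(rng.standard_normal((a, b)) * np.sqrt(1.0 / a), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def forward(params, x, dropout_p=0.3, train=False):
    """Forward pass with tanh activations and (inverted) dropout p=0.3 on hidden layers."""
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:            # hidden layers only
            h = np.tanh(h)
            if train:                      # dropout applied at train time only
                mask = rng.random(h.shape) >= dropout_p
                h = h * mask / (1.0 - dropout_p)
    return 1.0 / (1.0 + np.exp(-h))        # assumed sigmoid output for binary labels
```

Model selection would then proceed as the table describes: train for several epochs and keep the checkpoint with the highest validation F1.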