Incentivizing Recourse through Auditing in Strategic Classification

Authors: Andrew Estornell, Yatong Chen, Sanmay Das, Yang Liu, Yevgeniy Vorobeychik

IJCAI 2023

Reproducibility Variable — Result — LLM Response
Research Type — Experimental: "We conduct experiments using four common datasets: Adult Income [Kohavi and others, 1996], Law School [Wightman and Council, 1998], German Credit [Dua and Graff, 2019], and Lending Club [Lending Club, 2018]... We measure the fraction of the population performing recourse or manipulation, as well as the average cost incurred by agents for either action (Figure 1)."

Researcher Affiliation — Academia: Washington University in Saint Louis; University of California Santa Cruz; George Mason University
Pseudocode — No: The paper contains no sections or figures explicitly labeled "Pseudocode" or "Algorithm", nor any structured code-like blocks.

Open Source Code — No: The paper contains no statement or link indicating that the source code for the methodology is openly available.
Open Datasets — Yes: "We conduct experiments using four common datasets: Adult Income [Kohavi and others, 1996], Law School [Wightman and Council, 1998], German Credit [Dua and Graff, 2019], and Lending Club [Lending Club, 2018]"

Dataset Splits — Yes: "Full experimental details are provided in the supplement Section A.5. We use a 70/30 train/test split, and for all datasets we train a Logistic Regression and a 2-layer Neural Network model... We use 5-fold cross validation for hyperparameter tuning..."
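The split-and-tune protocol quoted above (70/30 train/test split, 5-fold cross validation for hyperparameter tuning) can be sketched as follows. Since the paper releases no code, the function names, random seeds, and the synthetic stand-in data are all assumptions for illustration:

```python
import numpy as np

def train_test_split(X, y, test_frac=0.3, seed=0):
    """Shuffle, then hold out 30% of rows as the test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    return X[idx[n_test:]], X[idx[:n_test]], y[idx[n_test:]], y[idx[:n_test]]

def five_fold_indices(n, k=5, seed=0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross validation."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# Synthetic stand-in for one of the tabular datasets.
X = np.random.default_rng(1).normal(size=(1000, 8))
y = (X[:, 0] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y)   # 700 train / 300 test
for tr_idx, val_idx in five_fold_indices(len(X_tr)):
    pass  # fit each hyperparameter setting on tr_idx, score on val_idx
```

Each of the 5 folds serves once as validation data, so every training row is used for validation exactly once during tuning.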
Hardware Specification — No: The paper gives no details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.

Software Dependencies — No: The paper mentions "Logistic Regression" and "2-layer Neural Networks" but does not list specific software packages with version numbers (e.g., Python, PyTorch, scikit-learn, TensorFlow).
Experiment Setup — Yes: "Full experimental details are provided in the supplement Section A.5. We use a 70/30 train/test split, and for all datasets we train a Logistic Regression and a 2-layer Neural Network model. We normalize all features... We use 5-fold cross validation for hyperparameter tuning, and use the Adam optimizer with a batch size of 1024 for 1000 epochs, with a learning rate of 1e-3 and a weight decay of 1e-4."
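The training recipe quoted above (normalized features, Adam, batch size 1024, learning rate 1e-3, weight decay 1e-4) can be sketched for the logistic-regression model as a minimal numpy implementation. The paper releases no code, so everything here beyond the stated hyperparameters — the synthetic data, the decision to fold L2 weight decay into the gradient, and the shortened epoch count — is an assumption:

```python
import numpy as np

def normalize(X):
    """Z-score each feature column (the paper normalizes all features)."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

def train_logreg_adam(X, y, epochs=1000, batch=1024, lr=1e-3, wd=1e-4, seed=0):
    """Logistic regression fit with minibatch Adam; L2 weight decay is
    folded into the gradient (assumed, not specified by the paper)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    m = np.zeros(d + 1)                    # Adam first moment for [w, b]
    v = np.zeros(d + 1)                    # Adam second moment for [w, b]
    b1, b2, eps, t = 0.9, 0.999, 1e-8, 0
    for _ in range(epochs):
        order = rng.permutation(n)
        for s in range(0, n, batch):
            bi = order[s:s + batch]
            p = 1.0 / (1.0 + np.exp(-(X[bi] @ w + b)))   # sigmoid
            err = p - y[bi]
            g = np.concatenate([X[bi].T @ err / len(bi) + wd * w, [err.mean()]])
            t += 1
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g * g
            step = lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
            w, b = w - step[:d], b - step[d]
    return w, b

rng = np.random.default_rng(2)
X = normalize(rng.normal(size=(500, 4)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w, b = train_logreg_adam(X, y, epochs=200)   # shortened from 1000 for the demo
acc = (((X @ w + b) > 0) == (y > 0.5)).mean()
```

With 500 rows and a batch size of 1024, each epoch is a single full-batch step, which is consistent with the paper's small tabular datasets.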