Amortized Generation of Sequential Algorithmic Recourses for Black-Box Models

Authors: Sahil Verma, Keegan Hines, John P. Dickerson

Venue: AAAI 2022

Reproducibility assessment (each variable below lists the result and the supporting LLM response):
Research Type: Experimental. "We provide experimental validation of FASTAR using three real-world datasets and a comparison against nine baselines."
Researcher Affiliation: Collaboration. (1) Arthur AI, (2) University of Washington, (3) University of Maryland.
Pseudocode: Yes. "Algorithm 1: Generate MDP from an Algorithmic Recourse Problem."
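
Algorithm 1 appears only as pseudocode in the paper. As a rough illustration of what generating an MDP from a recourse problem can look like, here is a minimal Python sketch assuming a Gym-style interface; the class name RecourseEnv, the single-feature increment actions, and the 0.5 decision threshold are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

class RecourseEnv:
    """Gym-style sketch of casting an algorithmic-recourse problem as an MDP.
    `clf` is the black-box classifier (must expose predict_proba); the state
    is the current feature vector, and each action perturbs one feature."""

    def __init__(self, clf, x0, deltas, desired_class=1, cf_reward=100.0):
        self.clf = clf                          # black-box model under recourse
        self.x0 = np.asarray(x0, dtype=float)   # individual seeking recourse
        self.deltas = deltas                    # action id -> (feature_index, increment)
        self.desired_class = desired_class
        self.cf_reward = cf_reward              # bonus on reaching a counterfactual
        self.state = self.x0.copy()

    def reset(self):
        self.state = self.x0.copy()
        return self.state.copy()

    def step(self, action):
        idx, delta = self.deltas[action]        # actions modify one feature at a time
        self.state[idx] += delta
        p = self.clf.predict_proba(self.state.reshape(1, -1))[0, self.desired_class]
        done = p >= 0.5                         # counterfactual state reached
        reward = self.cf_reward if done else p
        return self.state.copy(), reward, done, {}
```
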
Open Source Code: No. The paper does not provide concrete access to source code (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the described methodology.
Open Datasets: Yes. "We use three datasets in our experiments: German Credit, Adult Income, and Credit Default (Dua and Graff 2017). These datasets have 20, 13 (omitting education-num, as it has a one-to-one mapping with education), and 23 features, respectively."
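
As a sketch of obtaining one of these open datasets programmatically, scikit-learn can pull the Adult Income data from OpenML; the OpenML name, version, and column label below are assumptions (the paper cites the UCI repository directly).

```python
from sklearn.datasets import fetch_openml

# Adult Income via scikit-learn's OpenML mirror; the name/version here are
# assumptions (the paper cites the UCI repository). The paper drops
# education-num, which maps one-to-one to education, leaving 13 features.
adult = fetch_openml(name="adult", version=2, as_frame=True)
X = adult.data.drop(columns=["education-num"])  # column label as on OpenML
y = adult.target
```
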
Dataset Splits: Yes. "We split the datasets into 80%-10%-10% for training, validation, and testing, respectively."
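
A minimal sketch of producing this 80%-10%-10% split with two successive scikit-learn calls; the fixed random_state and stratification are assumptions added for reproducibility, not details stated in the paper.

```python
from sklearn.model_selection import train_test_split

# 80%-10%-10% train/validation/test via two successive splits.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=0, stratify=y_tmp)
```
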
Hardware Specification: No. The paper does not report the hardware used for its experiments (exact GPU/CPU models, processor speeds, or memory amounts).
Software Dependencies: No. The paper does not name the ancillary software needed for replication (e.g., libraries or solvers with version numbers); it mentions general algorithms such as PPO and GAE but no specific library versions.
Experiment Setup: Yes. "We trained a simple classifier: a neural network with two hidden layers (5 and 3 neurons) with ReLU activations. We use a particular instantiation of Algorithm 1 in the experiments:
- Action space: To produce sequential ARs, actions modify only one feature at a time. (...)
- Cost of action: We treat the DistF function as a hyperparameter and use several values for it in the experiments.
- Data manifold distance: Following previous work (...), we train a k-Nearest Neighbor (KNN) model on the training dataset and use it to find the ℓ1 distance of a given datapoint from its nearest neighbor (k = 1) in the dataset (DistD). We use several values of the adherence factor λ in the experiments.
- Counterfactual state reward (CFReward): The agent receives a reward equal to the probability of its state belonging to the desired class (between 0 and 1). However, when a counterfactual state is reached, the agent is rewarded with 100 points.
- Discount factor: We use a discount factor γ = 0.99."
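
The concrete pieces of this setup, the classifier, the DistD manifold distance, and the shaped reward, can be sketched as follows. This assumes numerically encoded feature matrices (X_train, y_train); in particular, how the adherence factor λ combines the manifold distance with the reward is an assumption here, since the paper defers that detail to its instantiation of Algorithm 1.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import NearestNeighbors

GAMMA = 0.99  # discount factor from the paper

# Classifier as described: two hidden layers (5 and 3 neurons), ReLU activations.
# Assumes X_train, y_train are numerically encoded arrays.
clf = MLPClassifier(hidden_layer_sizes=(5, 3), activation="relu", max_iter=1000)
clf.fit(X_train, y_train)

# Data-manifold distance DistD: l1 distance to the nearest neighbor (k = 1)
# in the training data, as in the paper.
nn = NearestNeighbors(n_neighbors=1, metric="l1").fit(X_train)

def dist_d(x):
    dist, _ = nn.kneighbors(np.asarray(x, dtype=float).reshape(1, -1))
    return float(dist[0, 0])

def reward(x, lam=0.1, desired_class=1, threshold=0.5):
    """Per-step reward: probability of the desired class (0-1), replaced by a
    100-point bonus once a counterfactual state is reached. Subtracting
    lam * DistD is an assumption about how the adherence factor enters."""
    p = clf.predict_proba(np.asarray(x, dtype=float).reshape(1, -1))[0, desired_class]
    base = 100.0 if p >= threshold else p
    return base - lam * dist_d(x)
```

Under this reading, the λ·DistD penalty nudges the agent to stay near the data manifold at every step, while the 100-point bonus dominates once a counterfactual state is reached.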