Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Lookahead Counterfactual Fairness

Authors: Zhiqun Zuo, Tian Xie, Xuwei Tan, Xueru Zhang, Mohammad Mahdi Khalili

TMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We conduct extensive experiments on both synthetic and real data to validate the proposed algorithm.
Researcher Affiliation Academia All authors are with the Department of Computer Science and Engineering, The Ohio State University: Zhiqun Zuo, Tian Xie, Xuwei Tan, Xueru Zhang, Mohammad Mahdi Khalili.
Pseudocode Yes Algorithm 1 Training a predictor with perfect LCF. Input: Training data D = {(x(i), y(i), a(i))}_{i=1}^{n}, response parameter η. 1: Estimate the structural equations (7) using D to determine parameters α, β, w, and γ. 2: For each data point (x(i), y(i), a(i)), draw m samples {u(i)[j]}_{j=1}^{m} from the conditional distribution Pr{U | X = x(i), A = a(i)} and generate the counterfactual ˇy(i)[j] associated with u(i)[j] based on structural equations (7). 3: Compute p1 = 1 / (2η(||wα||_2^2 + γ^2)). 4: Solve the optimization problem (ˆp2, ˆp3, ˆθ) = argmin_{p2, p3, θ} Σ_{i=1}^{n} Σ_{j=1}^{m} l(g(ˇy(i)[j], u(i)[j]), y(i)), where g(ˇy(i)[j], u(i)[j]) = p1 (ˇy(i)[j])^2 + p2 ˇy(i)[j] + p3 + hθ(u(i)[j]), θ is a parameter of the function h, and l is a loss function. Output: ˆp2, ˆp3, ˆθ
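The four steps of the quoted pseudocode can be sketched in code. The sketch below is illustrative, not the paper's implementation: it assumes scalar linear-Gaussian structural equations (X = αU + βA, Y = wX + γU), a linear hθ(u) = θu, and a squared loss, so step 4 collapses to least squares; all variable names and the jitter used for the posterior samples are hypothetical.

```python
# Hypothetical sketch of Algorithm 1 (training a predictor with perfect LCF).
# Assumed structural equations: X = alpha*U + beta*A, Y = w*X + gamma*U.
import numpy as np

rng = np.random.default_rng(0)
n, m, eta = 200, 20, 0.5

# Synthetic data generated from the assumed structural equations
alpha, beta, w, gamma = 1.0, 0.5, 2.0, 1.5
u = rng.normal(size=n)
a = rng.integers(0, 2, size=n).astype(float)
x = alpha * u + beta * a
y = w * x + gamma * u

# Step 1: estimate structural parameters (here, ordinary least squares)
alpha_hat, beta_hat = np.linalg.lstsq(np.column_stack([u, a]), x, rcond=None)[0]
w_hat, gamma_hat = np.linalg.lstsq(np.column_stack([x, u]), y, rcond=None)[0]

# Step 2: draw m samples of U per data point and form counterfactual outcomes.
# In this noiseless linear model U is identified, so the conditional
# distribution collapses; small jitter is added purely for illustration.
u_samp = u[:, None] + 0.01 * rng.normal(size=(n, m))
y_check = w_hat * (alpha_hat * u_samp + beta_hat * a[:, None]) + gamma_hat * u_samp

# Step 3: fix the quadratic coefficient p1
p1 = 1.0 / (2.0 * eta * ((w_hat * alpha_hat) ** 2 + gamma_hat ** 2))

# Step 4: fit p2, p3, theta by least squares (squared loss, h_theta(u) = theta*u)
feats = np.column_stack([
    y_check.ravel(),         # multiplies p2
    np.ones(n * m),          # multiplies p3
    u_samp.ravel(),          # multiplies theta
])
target = np.repeat(y, m) - p1 * y_check.ravel() ** 2
p2, p3, theta = np.linalg.lstsq(feats, target, rcond=None)[0]
print(p1, p2, p3, theta)
```

With the noiseless synthetic data, step 1 recovers the structural parameters exactly, and p1 evaluates to 1 / (2 · 0.5 · (4 + 2.25)) = 0.16.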
Open Source Code Yes The code for this paper is available at https://github.com/osu-srml/LCF.
Open Datasets Yes We further measure the performance of our proposed method using the Law School Admission Dataset (Wightman, 1998). We measure the performance of our proposed method using the Loan Prediction Problem Dataset (kaggle.com, https://www.kaggle.com/datasets/altruistdelhite04/loan-prediction-problem-dataset, accessed 20-10-2024).
Dataset Splits Yes We split the dataset into training/validation/test sets at a 60%/20%/20% ratio randomly and repeat the experiment 5 times.
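The quoted split procedure is simple enough to sketch directly. The snippet below is a generic illustration of a random 60/20/20 split repeated over 5 seeds, not the authors' code; the dataset size and the `split_indices` helper are hypothetical.

```python
# Sketch of a random 60%/20%/20% train/validation/test split, repeated 5 times.
import numpy as np

def split_indices(n, seed):
    """Shuffle indices 0..n-1 and cut them into 60/20/20 partitions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

for seed in range(5):  # "repeat the experiment 5 times"
    train, val, test = split_indices(1000, seed)
    assert len(train) + len(val) + len(test) == 1000
```

Seeding the generator per repetition keeps each of the 5 runs reproducible while still giving 5 different random partitions.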
Hardware Specification No No specific hardware details (e.g., GPU/CPU models, memory) are provided for running the experiments. The text mentions training with Adam optimizer and MCMC method, but not the hardware these were run on.
Software Dependencies No No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) are provided. The paper mentions 'Adam optimization' and 'Markov chain Monte Carlo (MCMC) method Geyer (1992)' but without specific software names or versions.
Experiment Setup Yes Based on our observation, Adam optimization with a learning rate equal to 10^-3 and 2000 epochs gives us the best performance. To train g(ˇy, u), we follow Algorithm 1 with m = 100. For each given data point, we sampled m = 500 different k's from this conditional distribution.
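The quoted hyperparameters (Adam, learning rate 10^-3, 2000 epochs) can be made concrete with a minimal training loop. This is a hand-rolled Adam on a stand-in quadratic objective, purely to pin down the reported settings; the data, the objective, and the true coefficients are all invented for illustration.

```python
# Minimal Adam loop matching the quoted hyperparameters: lr = 1e-3, 2000 epochs.
# The least-squares objective here is a placeholder, not the paper's loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_coef = np.array([0.5, -0.8, 0.3])
y = X @ true_coef

theta = np.zeros(3)
m_t, v_t = np.zeros(3), np.zeros(3)
lr, b1, b2, eps = 1e-3, 0.9, 0.999, 1e-8  # standard Adam defaults plus quoted lr

for t in range(1, 2001):  # 2000 epochs, full-batch gradients
    grad = 2 * X.T @ (X @ theta - y) / len(y)
    m_t = b1 * m_t + (1 - b1) * grad          # first-moment EMA
    v_t = b2 * v_t + (1 - b2) * grad ** 2     # second-moment EMA
    theta -= lr * (m_t / (1 - b1 ** t)) / (np.sqrt(v_t / (1 - b2 ** t)) + eps)

print(theta)  # drifts toward the true coefficients [0.5, -0.8, 0.3]
```

Because Adam's bias-corrected step size is capped near the learning rate, 2000 epochs at 10^-3 bounds each coefficient's total movement to roughly 2, which is why small learning rates are usually paired with large epoch counts like those quoted.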