Procedural Fairness Through Decoupling Objectionable Data Generating Components

Authors: Zeyu Tang, Jialu Wang, Yang Liu, Peter Spirtes, Kun Zhang

ICLR 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In this section, we present experimental results on both simulated and real-world data. In Section 5.1, we demonstrate how causal fairness notions can violate Requirement II of procedural fairness. In Section 5.2, we present experimental results on UCI Adult data set (Becker & Kohavi, 1996)." |
| Researcher Affiliation | Collaboration | (1) Department of Philosophy, Carnegie Mellon University; (2) Computer Science and Engineering Department, University of California, Santa Cruz; (3) Machine Learning Department, Mohamed bin Zayed University of Artificial Intelligence; (4) ByteDance Research |
| Pseudocode | Yes | "Algorithm 1: The Value Instantiation Rule for Local Causal Modules" |
| Open Source Code | Yes | "Our implementation can be found in the GitHub code repository: https://github.com/zeyutang/DecoupleObjectionable" |
| Open Datasets | Yes | "In Section 5.2, we present experimental results on UCI Adult data set (Becker & Kohavi, 1996). ... In this section, we present experimental results on the real-world Folktables data set (Ding et al., 2021)." |
| Dataset Splits | Yes | "We utilize the default train-test data split, and use 32,561 and 16,281 records for training and testing purposes, respectively." (See the loading sketch after this table.) |
| Hardware Specification | No | The paper does not describe the hardware (e.g., GPU/CPU models, memory, cloud instances) used to run its experiments; Table 2 only gives a hypothetical example of computational costs for scalability illustration, without naming actual hardware. |
| Software Dependencies | No | The paper mentions components such as "batch normalization", the "scaled exponential linear unit (SELU) activation function", and "simulated annealing", but it provides no version numbers for software dependencies or libraries (e.g., the Python version, or a deep learning framework version such as PyTorch or TensorFlow). |
| Experiment Setup | Yes | "the neural network contains two hidden layers with a hidden dimension max{5, d_in(V_i; G)}. We incorporate batch normalization (Ioffe & Szegedy, 2015) for each hidden layer and utilize the scaled exponential linear unit (SELU) activation function (Klambauer et al., 2017). The neural networks for local causal modules are optimized without any fairness constraint (Step 4 of Algorithm 2). ... we use simulated annealing (Kirkpatrick et al., 1983) to derive the reference point configuration function ReferencePoint(·) when focusing on the least advantaged individuals, as summarized in Equation (4)." (See the module sketch after this table.) |
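
For concreteness, here is a minimal sketch of loading the default UCI Adult train-test split quoted in the Dataset Splits row. The URLs and column names follow the public UCI repository's adult.names file; nothing here is taken from the authors' code.

```python
import pandas as pd

# Standard UCI repository locations for the Adult data set
# (Becker & Kohavi, 1996): adult.data is the default training
# portion and adult.test the default testing portion.
BASE = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult"
COLS = [
    "age", "workclass", "fnlwgt", "education", "education-num",
    "marital-status", "occupation", "relationship", "race", "sex",
    "capital-gain", "capital-loss", "hours-per-week",
    "native-country", "income",
]

train = pd.read_csv(f"{BASE}/adult.data", names=COLS, skipinitialspace=True)
# adult.test begins with a non-data header line, and its labels carry
# a trailing period ("<=50K." / ">50K.") that must be stripped.
test = pd.read_csv(f"{BASE}/adult.test", names=COLS,
                   skipinitialspace=True, skiprows=1)
test["income"] = test["income"].str.rstrip(".")

# The default split sizes reported in the paper.
assert len(train) == 32_561 and len(test) == 16_281
```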
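
And a minimal PyTorch sketch of one local causal module matching the reported experiment setup: two hidden layers of width max{5, d_in(V_i; G)}, batch normalization on each hidden layer, and SELU activations. The class name, constructor arguments, and the toy fitting step are illustrative assumptions, not code from the authors' repository.

```python
import torch
import torch.nn as nn

class LocalCausalModule(nn.Module):
    """Sketch of one local causal module f_i: parents(V_i) -> V_i.

    Mirrors the reported setup: two hidden layers of width
    max(5, in_degree), batch normalization on each hidden layer,
    and SELU activations. All names here are illustrative.
    """

    def __init__(self, in_degree: int, out_dim: int = 1):
        super().__init__()
        hidden = max(5, in_degree)  # hidden dimension max{5, d_in(V_i; G)}
        self.net = nn.Sequential(
            nn.Linear(in_degree, hidden),
            nn.BatchNorm1d(hidden),
            nn.SELU(),
            nn.Linear(hidden, hidden),
            nn.BatchNorm1d(hidden),
            nn.SELU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, parent_values: torch.Tensor) -> torch.Tensor:
        return self.net(parent_values)

# Step 4 of Algorithm 2 fits each module without fairness constraints;
# a plain regression loss on random toy data is assumed for illustration.
module = LocalCausalModule(in_degree=3)
optimizer = torch.optim.Adam(module.parameters(), lr=1e-3)
parents, target = torch.randn(64, 3), torch.randn(64, 1)
loss = nn.functional.mse_loss(module(parents), target)
loss.backward()
optimizer.step()
```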