Pairwise Fairness for Ranking and Regression

Authors: Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Serena Wang

AAAI 2020, pp. 5248-5255 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments illustrate the broad applicability and trade-offs of these methods. We illustrate our proposals on five ranking problems and two regression problems."
Researcher Affiliation | Industry | Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Serena Wang; Google Research, 1600 Amphitheatre Pkwy, Mountain View, CA 94043; {hnarasimhan, acotter, mayagupta, serenawang}@google.com
Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format.
Open Source Code | Yes | "Code available at: https://github.com/google-research/google-research/tree/master/pairwise_fairness"
Open Datasets | Yes | "Wiki Talk Page Comments: This public dataset contains 127,820 comments from Wikipedia Talk Pages labeled with whether or not they are toxic (i.e. contain rude, disrespectful or unreasonable content) (Dixon et al. 2018)." "We use the Communities and Crime dataset from UCI (Dua and Graff 2017)."
Dataset Splits | Yes | "The datasets used are split randomly into training, validation and test sets in the ratio 1/2:1/4:1/4, with the validation set used to tune the relevant hyperparameters." (See the split sketch after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models or memory specifications.
Software Dependencies | No | The paper mentions using the TensorFlow constrained optimization toolbox but does not specify version numbers for this or any other software dependencies.
Experiment Setup | Yes | "The datasets used are split randomly into training, validation and test sets in the ratio 1/2:1/4:1/4, with the validation set used to tune the relevant hyperparameters." "We train linear ranking functions f : R^2 → R and impose a cross-group equal opportunity criterion with constrained optimization by constraining |A_{0>1} - A_{1>0}| ≤ 0.01." "All methods trained a two-layer neural network model with 10 hidden nodes." "We learn a convolutional neural network model with the same architecture used in Dixon et al. (2018)." (See the constraint sketch after the table.)
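
The 1/2:1/4:1/4 split quoted in the Dataset Splits row is simple to reproduce. Below is a minimal sketch, not the authors' code: it assumes the data is already loaded as a NumPy feature matrix X and label vector y, and the function name and seed are illustrative.

```python
import numpy as np

def train_val_test_split(X, y, seed=0):
    """Randomly split the data 1/2 : 1/4 : 1/4 into train/validation/test,
    matching the ratio reported in the paper."""
    rng = np.random.default_rng(seed)
    n = len(X)
    perm = rng.permutation(n)
    n_train, n_val = n // 2, n // 4
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]
    return ((X[train_idx], y[train_idx]),
            (X[val_idx], y[val_idx]),
            (X[test_idx], y[test_idx]))
```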
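The constraint in the Experiment Setup row compares cross-group pairwise accuracies A_{i>j}: among pairs where an example from group i has a higher label than an example from group j, the fraction the model scores in the correct order. The sketch below only evaluates the empirical quantity being constrained, not the constrained training itself (the paper uses the TensorFlow constrained optimization toolbox for that); the function names are illustrative, and ties in score are counted as mis-ordered here, which may differ from the paper's convention.

```python
import numpy as np

def pairwise_accuracy(scores, labels, groups, i, j):
    """Empirical A_{i>j}: over pairs (a, b) with groups[a] == i,
    groups[b] == j, and labels[a] > labels[b], the fraction with
    scores[a] > scores[b]. Returns NaN if no such pairs exist."""
    correct, total = 0, 0
    for a in range(len(scores)):
        for b in range(len(scores)):
            if groups[a] == i and groups[b] == j and labels[a] > labels[b]:
                total += 1
                correct += scores[a] > scores[b]
    return correct / total if total else float("nan")

def eo_gap(scores, labels, groups):
    """Cross-group equal opportunity gap |A_{0>1} - A_{1>0}|,
    constrained to be at most 0.01 in the paper's experiments."""
    a01 = pairwise_accuracy(scores, labels, groups, 0, 1)
    a10 = pairwise_accuracy(scores, labels, groups, 1, 0)
    return abs(a01 - a10)
```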