A Fair Classifier Using Kernel Density Estimation

Authors: Jaewoong Cho, Gyeongjo Hwang, Changho Suh

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically show that our algorithm achieves greater or comparable performances against prior fair classifiers in accuracy-fairness tradeoff as well as in training stability on both synthetic and benchmark real datasets. Our extensive experiments conducted both on synthetic and benchmark real datasets (Law School Admissions [36], Adult Census [6], Credit Card Default [6, 39], and COMPAS [2]) demonstrate that our algorithm achieves higher accuracy-fairness tradeoff relative to the state of the arts [42, 41, 44, 1, 25, 12], both w.r.t. demographic parity and equalized odds.
Researcher Affiliation | Academia | Jaewoong Cho (EE, KAIST, cjw2525@kaist.ac.kr); Gyeongjo Hwang (EE, KAIST, hkj4276@kaist.ac.kr); Changho Suh (EE, KAIST, chsuh@kaist.ac.kr)
Pseudocode | No | The paper describes the proposed approach and its components using mathematical equations and textual descriptions, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper states "We implement our algorithm in PyTorch [26]" but does not provide any link or explicit statement about releasing the source code for their methodology.
Open Datasets | Yes | We provide experimental results conducted on synthetic and four benchmark real datasets (COMPAS [2], Adult Census [6], Law School Admissions [36], and Credit Card Default [6, 39]).
Dataset Splits | No | The paper describes train and test splits for the datasets (e.g., "80% train set... and 20% test set" for the synthetic data, and specific train/test example counts for the real datasets COMPAS, Adult Census, Law School Admissions, and Credit Card Default), but does not explicitly mention a validation split.
Hardware Specification | Yes | We implement our algorithm in PyTorch [26], and all experiments are performed on a server with GeForce GTX 1080 Ti GPUs.
Software Dependencies | No | The paper mentions "We implement our algorithm in PyTorch [26]" but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | We train fair classifiers with a 2-layer NN with 16 hidden nodes. For our approach, we set hyperparameters δ (of the Huber function) and h to be 1 and 0.1, respectively. ... We use the batch size of 512. We use Adam optimizer and its default parameters (β1, β2) = (0.9, 0.999) with the learning rate of 10⁻².
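Since the paper provides neither pseudocode nor released source code (see the Pseudocode and Open Source Code rows), the following is only a minimal sketch of the idea the title and the hyperparameters above point to: a Gaussian-kernel density estimate over the classifier's output scores yields a differentiable stand-in for each group's positive-prediction rate, and a Huber function (with the δ = 1 noted above) smooths the resulting demographic-parity gap. The function names, the threshold τ = 0.5, and the penalty structure are illustrative assumptions, not the authors' implementation.

```python
import torch


def kde_positive_rate(scores, tau=0.5, h=0.1):
    """Differentiable estimate of Pr(score > tau), obtained from a
    Gaussian-kernel density estimate of the score distribution
    (bandwidth h). The Gaussian CDF smooths the hard threshold."""
    normal = torch.distributions.Normal(0.0, 1.0)
    # 1 - F_hat(tau), where F_hat is the KDE-based CDF of the scores
    return (1.0 - normal.cdf((tau - scores) / h)).mean()


def huber(x, delta=1.0):
    """Huber function: quadratic near zero, linear in the tails.
    Serves as a smooth surrogate for |x| in the fairness penalty."""
    ax = x.abs()
    return torch.where(ax <= delta, 0.5 * x ** 2, delta * (ax - 0.5 * delta))


def ddp_penalty(scores, groups, h=0.1, delta=1.0):
    """Smooth surrogate for the demographic-parity gap: the difference
    of KDE-estimated positive rates between the two sensitive groups,
    passed through the Huber function."""
    rate0 = kde_positive_rate(scores[groups == 0], h=h)
    rate1 = kde_positive_rate(scores[groups == 1], h=h)
    return huber(rate1 - rate0, delta)
```

In training, such a penalty would be added to the ordinary classification loss with some trade-off weight, which is how a tunable accuracy-fairness tradeoff of the kind reported in the Research Type row would arise.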
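The quoted setup also maps onto a small PyTorch training configuration. Below is a sketch under stated assumptions: the input dimension, the ReLU hidden activation, the fairness weight `lambda_fair`, and the data handling are placeholders; only the layer sizes, batch size, optimizer, and learning rate come from the Experiment Setup row.

```python
import torch
import torch.nn as nn


def build_classifier(input_dim):
    # 2-layer NN with 16 hidden nodes and a sigmoid output score, as in the
    # Experiment Setup row; ReLU is an assumed hidden activation.
    return nn.Sequential(
        nn.Linear(input_dim, 16),
        nn.ReLU(),
        nn.Linear(16, 1),
        nn.Sigmoid(),
    )


model = build_classifier(input_dim=10)  # input_dim is dataset-dependent (illustrative)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2, betas=(0.9, 0.999))
batch_size = 512  # batch size reported in the paper


def train_step(x, y, z, lambda_fair=0.5):
    """One gradient step: binary cross-entropy plus the KDE-based
    demographic-parity penalty sketched above (lambda_fair is illustrative)."""
    scores = model(x).squeeze(1)
    loss = nn.functional.binary_cross_entropy(scores, y.float())
    loss = loss + lambda_fair * ddp_penalty(scores, z, h=0.1, delta=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `x` is a feature batch, `y` the binary labels, and `z` the sensitive-attribute indicators; all three would come from an 80%/20% train/test split of the kind mentioned in the Dataset Splits row.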