Consistent Plug-in Classifiers for Complex Objectives and Constraints

Authors: Shiv Kumar Tavker, Harish Guruprasad Ramaswamy, Harikrishna Narasimhan

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show empirically that our algorithm is competitive with prior methods, while being more robust to choices of hyper-parameters." "We present experiments on benchmark fairness datasets and show that the proposed algorithm performs at least as well as existing methods, while being more robust to choices of hyper-parameters."
Researcher Affiliation | Collaboration | Shiv Kumar Tavker (Indian Institute of Technology Madras, India; shivtavker@smail.iitm.ac.in); Harish G. Ramaswamy (Indian Institute of Technology Madras, India; hariguru@cse.iitm.ac.in); Harikrishna Narasimhan (Google Research, USA; hnarasimhan@google.com)
Pseudocode | Yes | Algorithm 1: The Split Bayes-Frank-Wolfe (SBFW) Algorithm; Algorithm 2: Plug-in Method for LMOC. (A generic plug-in sketch appears below the table.)
Open Source Code | Yes | "Code available at: https://github.com/shivtavker/constrained-classification."
Open Datasets | Yes | "We ran experiments on five datasets: (1) COMPAS... (2) Communities & Crime... (3) Law School... (4) Adult... (5) Default..." "All experiments in this paper were carried out with publicly available datasets." [13] A. Frank and A. Asuncion. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml, 2010.
Dataset Splits | No | "We used 2/3-rd of the data for training and 1/3-rd for testing." The paper specifies a train/test split but does not mention a separate validation split or how validation was performed. (A split sketch appears below the table.)
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It only states that 'All experiments use a linear model.'
Software Dependencies | No | The paper mentions modeling choices such as 'logistic regression' and a 'linear model' but does not name the software libraries used or specify version numbers for any dependencies.
Experiment Setup | Yes | "The hyper-parameters were tuned separately for each method using the heuristic of Cotter et al. (2019) [10] to trade off between the objective and the violations." "Figure 3: Robustness to hyper-parameters: Train G-mean and equal opportunity violation for six step sizes (10^-4, 10^-3, ..., 10) on the COMPAS dataset." (A step-size sweep sketch appears below the table.)
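
The "Pseudocode" and "Software Dependencies" rows mention a plug-in method built on logistic-regression/linear models. The sketch below is not the paper's Algorithm 2; it is a minimal, generic illustration of the plug-in recipe such methods rely on (fit a class-probability estimator, then apply a cost-derived threshold), assuming scikit-learn and illustrative cost weights c01 and c10 that are not taken from the paper.

```python
# Minimal sketch of a generic plug-in classifier for a cost-weighted (linear)
# objective. This is NOT the paper's Algorithm 2; it only illustrates the
# standard plug-in recipe: estimate class probabilities, then threshold them
# at a cost-derived value. Binary labels are assumed to be in {0, 1}.
from sklearn.linear_model import LogisticRegression


def fit_cpe(X_train, y_train):
    """Fit a class-probability estimator (here: plain logistic regression,
    in line with the linear models mentioned in the paper)."""
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)


def plugin_predict(cpe, X, c01=1.0, c10=1.0):
    """Threshold eta_hat(x) = P(y = 1 | x) at c10 / (c01 + c10).

    c01: cost of predicting 0 when the label is 1 (false negative).
    c10: cost of predicting 1 when the label is 0 (false positive).
    With equal costs this reduces to the usual 0.5 threshold.
    """
    eta_hat = cpe.predict_proba(X)[:, 1]
    threshold = c10 / (c01 + c10)
    return (eta_hat >= threshold).astype(int)


if __name__ == "__main__":
    # Tiny synthetic example; the paper's datasets are not bundled here.
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=500, random_state=0)
    cpe = fit_cpe(X, y)
    print(plugin_predict(cpe, X[:5], c01=2.0, c10=1.0))
```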
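
The "Dataset Splits" row reports a 2/3 train and 1/3 test split. A minimal sketch of such a split, using scikit-learn and a synthetic stand-in dataset (the seed and the data are assumptions, not the paper's), is:

```python
# Minimal sketch of the 2/3 train / 1/3 test split reported in the
# "Dataset Splits" row. Stand-in data is used because the paper's datasets
# (COMPAS, Communities & Crime, Law School, Adult, Default) are not bundled;
# the random seed is an assumption, since the paper reports none.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1 / 3, random_state=0
)
print(len(X_train), len(X_test))  # roughly 2000 / 1000
```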
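
The "Experiment Setup" row describes tuning over six step sizes, 10^-4 through 10, using the trade-off heuristic of Cotter et al. (2019) [10]. The sketch below sweeps those step sizes and selects one with a simplified feasibility-then-objective rule; the rule, the tolerance, and the train_and_evaluate placeholder are assumptions and do not reproduce the exact heuristic of [10].

```python
# Minimal sketch of the hyper-parameter sweep in the "Experiment Setup" row:
# six step sizes 10^-4, 10^-3, ..., 10, with a simplified trade-off rule for
# picking one. train_and_evaluate is a hypothetical stand-in (a dummy here)
# for running one method, e.g. SBFW, and returning an objective such as
# train G-mean (higher is better) plus its constraint violation.
import numpy as np

rng = np.random.default_rng(0)


def train_and_evaluate(step_size):
    """Hypothetical placeholder: train with the given step size and return
    (objective, constraint_violation). Dummy values keep the sketch runnable."""
    return rng.uniform(0.5, 0.8), rng.uniform(0.0, 0.2)


step_sizes = [10.0 ** k for k in range(-4, 2)]  # 1e-4, 1e-3, ..., 10
tolerance = 0.05  # allowed constraint violation (an assumption)

results = [(eta, *train_and_evaluate(eta)) for eta in step_sizes]

feasible = [r for r in results if r[2] <= tolerance]
if feasible:
    # Best objective among the runs whose violation is within tolerance.
    best = max(feasible, key=lambda r: r[1])
else:
    # Otherwise fall back to the run with the smallest violation.
    best = min(results, key=lambda r: r[2])

print("selected step size:", best[0])
```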