Nonconvex Optimization for Regression with Fairness Constraints

Authors: Junpei Komiyama, Akiko Takeda, Junya Honda, Hajime Shimao

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed method is empirically evaluated on four real-world datasets. Unlike most methods, our method is capable of considering the possibly non-linear interaction of numeric sensitive attributes with the target variable. As we consider nonconvexity that naturally arises in measuring a correlation between s and y, we think this result is a first step that ties the study of nonconvex optimization to fairness-aware machine learning. Figure 2 shows the results of our simulations.
Researcher Affiliation | Academia | 1. The University of Tokyo, Tokyo, Japan; 2. RIKEN AIP, Tokyo, Japan; 3. Santa Fe Institute, New Mexico, United States.
Pseudocode | No | The paper describes mathematical formulations for its optimization problems (e.g., SDP, QCQP) but does not include a distinct pseudocode block or algorithm section.
Open Source Code | Yes | The source code used in the simulation is available at https://github.com/jkomiyama/fairregresion.
Open Datasets | Yes | The Communities and Crime (C&C) dataset; the COMPAS dataset (Angwin et al., 2016); the National Longitudinal Survey of Youth (NLSY) dataset (https://www.bls.gov/nls/); and the Law School Admissions Council (LSAC) dataset (http://www2.law.ucla.edu/sander/Systemic/Data.htm).
Dataset Splits | Yes | We split the data into five folds: one fold was the validation dataset used to optimize the hyperparameters, another was the test dataset, and the remaining three folds formed the training dataset.
Hardware Specification | No | The simulation here was conducted on modern Xeon-core PC servers. While this names a processor family, it lacks specific model numbers or detailed hardware specifications such as GPU models or memory amounts.
Software Dependencies | No | We solved the convex QCQP optimization using the Gurobi optimizer. While Gurobi is named, no version number for Gurobi or any other software dependency is provided.
Experiment Setup | Yes | The hyperparameters were optimized on validation datasets among λ = {1.0, 10.0, 100.0} and γ = {0.1, 1.0, 10.0, 100.0}, where γ was the hyperparameter of the RBF kernel K(x, y) = exp(−γ‖x − y‖²).
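The five-fold split described under "Dataset Splits" (three folds for training, one for validation, one for testing) can be sketched as follows; the function name and the fixed seed are illustrative assumptions, not taken from the released code.

```python
import numpy as np

def five_fold_split(n_samples, seed=0):
    """Partition sample indices into 5 folds: 3 train, 1 validation, 1 test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)          # five near-equal index groups
    val_idx, test_idx = folds[0], folds[1]  # one fold each for val/test
    train_idx = np.concatenate(folds[2:])   # remaining three folds for training
    return train_idx, val_idx, test_idx

train_idx, val_idx, test_idx = five_fold_split(100)
```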
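The hyperparameter search in "Experiment Setup" amounts to a grid search over λ and γ, with γ parameterizing the RBF kernel K(x, y) = exp(−γ‖x − y‖²). A minimal sketch, assuming a user-supplied validation-error function (the paper does not publish this helper):

```python
import itertools
import numpy as np

def rbf_kernel(x, y, gamma):
    """RBF kernel K(x, y) = exp(-gamma * ||x - y||^2)."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-gamma * np.dot(diff, diff))

# Grid reported in the paper's experiment setup.
LAMBDAS = [1.0, 10.0, 100.0]
GAMMAS = [0.1, 1.0, 10.0, 100.0]

def grid_search(validation_error):
    """Return the (lambda, gamma) pair minimizing the given validation error."""
    return min(itertools.product(LAMBDAS, GAMMAS),
               key=lambda pair: validation_error(*pair))
```

In practice `validation_error` would fit the fairness-constrained regression on the training folds and score it on the validation fold; here it is left abstract.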