An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning

Authors: Cyrus Cousins

Venue: NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental Validation: Figure 1 presents a brief experiment on the lauded adult dataset, where the task is to predict whether income is above or below $50k/yr. We train an M_p(·; w)-minimizing SVM and find significant risk variation between groups: generally low risk for the white and Asian groups, and high risk for the Native American and other groups. (A per-group risk-evaluation sketch follows the table.)
Researcher Affiliation | Academia | Cyrus Cousins, Department of Computer Science, Brown University, cyrus_cousins@brown.edu
Pseudocode | Yes | Algorithm 1: Approximate Empirical Malfare Minimization via the Projected Subgradient Method. (A subgradient-step sketch follows the table.)
Open Source Code | No | The paper mentions 'assistance with the experimental code' in the acknowledgments but provides no explicit statement of, or link to, source code for the described methodology.
Open Datasets | Yes | The experiment uses the public UCI Adult dataset (predicting whether income is above or below $50k/yr), with the setup detailed in appendix A.1; the paper cites [11] Dheeru Dua and Casey Graff. UCI Machine Learning Repository, 2021. URL http://archive.ics.uci.edu/ml. (A data-loading sketch follows the table.)
Dataset Splits | No | The paper discusses training and test performance but does not specify validation splits (e.g., percentages, counts, or k-fold cross-validation).
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments are provided.
Software Dependencies | No | No software dependencies with version numbers are mentioned; the paper refers only to model classes such as 'hinge-loss SVM' and 'logistic regressors'.
Experiment Setup | No | The paper states 'The experimental setup is detailed in appendix A.1.' but does not provide specific hyperparameter values or training configurations in the main text.
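
For the Open Datasets row above, the following is a minimal sketch of how the cited UCI Adult data could be loaded and split for such an experiment. It is not the authors' code: the OpenML mirror, the `version=2` identifier, the `race` group column, and the 80/20 split are assumptions made for illustration (the paper itself does not report its split).

```python
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

# OpenML mirror of the UCI Adult dataset (income above or below $50k/yr).
adult = fetch_openml(name="adult", version=2, as_frame=True)
X_frame, target = adult.data, adult.target

# Binary labels in {-1, +1} and a protected-group attribute (race).
y = np.where(target == ">50K", 1, -1)
groups = X_frame["race"].to_numpy()

# Categorical features would still need one-hot encoding before SVM training.
# The paper does not report its split; 80/20 here is a placeholder, not its setup.
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X_frame, y, groups, test_size=0.2, random_state=0)
```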
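
To make the quoted Figure 1 experiment concrete (Research Type row), here is a minimal sketch of per-group empirical risk and power-mean malfare evaluation for a trained hinge-loss SVM. The helper names (`hinge_losses`, `per_group_risks`, `power_mean_malfare`) and the formula M_p(r; w) = (sum_i w_i r_i^p)^(1/p) are assumptions based on standard power-mean notation, not code from the paper.

```python
import numpy as np

def hinge_losses(y, scores):
    """Per-example hinge loss for labels y in {-1, +1} and real-valued margins."""
    return np.maximum(0.0, 1.0 - y * scores)

def per_group_risks(scores, y, groups):
    """Mean hinge loss within each demographic group."""
    return {g: hinge_losses(y[groups == g], scores[groups == g]).mean()
            for g in np.unique(groups)}

def power_mean_malfare(risks, weights, p):
    """Weighted p-power mean of per-group risks: p = 1 is the weighted average
    risk, and p -> infinity approaches the risk of the worst-off group."""
    r = np.asarray(risks, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(r.max()) if np.isinf(p) else float((w @ r**p) ** (1.0 / p))
```

With a fitted `sklearn.svm.LinearSVC`, `scores = clf.decision_function(X_test)` supplies the margins; comparing the `per_group_risks` values across race groups is the kind of per-group risk comparison the quoted passage reports.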
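
For the Pseudocode row, a hedged sketch of one step of projected subgradient descent on the power-mean malfare of per-group hinge risks of a linear model. This illustrates the general technique named by Algorithm 1, not the paper's algorithm verbatim: the function name, the Euclidean-ball constraint set, the dictionary of group weights, and the restriction to finite p >= 1 are choices made here.

```python
import numpy as np

def malfare_subgradient_step(theta, X, y, groups, group_weights, p, lr, radius):
    """One projected-subgradient step on the power-mean malfare
    M_p(r; w) = (sum_i w_i * r_i**p)**(1/p) of per-group hinge risks of a
    linear classifier theta.  `group_weights` maps each group label to its
    weight; labels y are in {-1, +1}; p is assumed finite and >= 1."""
    group_ids = np.unique(groups)
    risks, risk_subgrads = [], []
    for g in group_ids:
        Xg, yg = X[groups == g], y[groups == g]
        margins = yg * (Xg @ theta)
        viol = margins < 1.0                        # hinge-loss violations
        risks.append(np.maximum(0.0, 1.0 - margins).mean())
        # Subgradient of this group's mean hinge loss w.r.t. theta.
        risk_subgrads.append(-(yg[viol][:, None] * Xg[viol]).sum(axis=0) / len(yg))
    r = np.maximum(np.asarray(risks), 1e-12)        # guard r**(p-1) at zero risk
    w = np.asarray([group_weights[g] for g in group_ids], dtype=float)
    w = w / w.sum()
    # Chain rule: dM_p/dr_i = (w @ r**p)**(1/p - 1) * w_i * r_i**(p - 1).
    dM_dr = (w @ r**p) ** (1.0 / p - 1.0) * w * r ** (p - 1.0)
    subgrad = sum(c * v for c, v in zip(dM_dr, risk_subgrads))
    theta = theta - lr * subgrad                    # subgradient step
    norm = np.linalg.norm(theta)                    # project onto ||theta|| <= radius
    return theta if norm <= radius else theta * (radius / norm)
```

Iterating this step with a decaying step size (e.g., proportional to 1/sqrt(t)) and averaging the iterates is the textbook projected-subgradient recipe for a convex objective; the paper's Algorithm 1 should be consulted for its exact step sizes, stopping condition, and guarantees.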