Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics

Authors: Debjani Saha, Candice Schumann, Duncan McElfresh, John Dickerson, Michelle Mazurek, Michael Tschantz

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We develop a metric to measure comprehension of three such definitions: demographic parity, equal opportunity, and equalized odds. We evaluate this metric using an online survey, and investigate the relationship between comprehension and sentiment, demographics, and the definition itself.
Researcher Affiliation | Academia | 1 University of Maryland, College Park, MD; 2 ICSI, Berkeley, CA. Correspondence to: Michelle L. Mazurek <mmazurek@cs.umd.edu>.
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks; it describes the survey design and data analysis methods in prose.
Open Source Code | Yes | The full analysis script for both studies can be found on GitHub.
Open Datasets | No | The paper describes conducting online surveys to collect data but does not provide concrete access information (link, DOI, repository, or citation to a publicly available version) for the raw survey data or any predefined public dataset.
Dataset Splits | No | The paper describes validating its comprehension score and uses 'train' only in the context of survey participants adhering to rules; it does not specify train/validation/test dataset splits for machine learning model training or evaluation.
Hardware Specification | No | The paper does not specify any hardware (e.g., GPU models, CPU types, or cloud resources) used to run its experiments or analyses.
Software Dependencies | Yes | All statistical analysis was performed using R version 3.6.0.
Experiment Setup | No | The paper details the online survey design, participant recruitment, and qualitative coding of responses, but it does not report a machine learning training setup such as hyperparameters, batch sizes, or optimizer settings.
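The three fairness definitions named in the Research Type row (demographic parity, equal opportunity, and equalized odds) are standard group-fairness criteria. The Python sketch below, which is not taken from the paper or its analysis scripts, illustrates one common way to compute the corresponding gaps from binary predictions for two groups; the function names and toy data are illustrative assumptions.

```python
# Minimal sketch of three group-fairness gaps (assumes exactly two groups).
# Not the authors' code; names and data are illustrative only.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive prediction rates between the two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true positive rates between the two groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return abs(tprs[0] - tprs[1])

def equalized_odds_gap(y_true, y_pred, group):
    """Larger of the true positive rate gap and the false positive rate gap."""
    fprs = [y_pred[(group == g) & (y_true == 0)].mean() for g in np.unique(group)]
    return max(equal_opportunity_gap(y_true, y_pred, group), abs(fprs[0] - fprs[1]))

# Toy example: binary labels, predictions, and a two-valued group attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))
print(equal_opportunity_gap(y_true, y_pred, group))
print(equalized_odds_gap(y_true, y_pred, group))
```

A metric satisfies the corresponding definition when its gap is zero (or below a chosen tolerance); the paper itself studies how well non-experts comprehend these definitions, not how to enforce them.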