Stochastic Differentially Private and Fair Learning

Authors: Andrew Lowy, Devansh Gupta, Meisam Razaviyayn

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over the state-of-the-art baselines, and can be applied to larger scale problems with non-binary target/sensitive attributes.
Researcher Affiliation | Academia | Andrew Lowy (University of Southern California, lowya@usc.edu); Devansh Gupta (Indraprastha Institute of Information Technology, Delhi, devansh19160@iiitd.ac.in); Meisam Razaviyayn (University of Southern California, razaviya@usc.edu)
Pseudocode | Yes | Algorithm 1: DP-FERMI Algorithm for Private Fair ERM
Open Source Code | No | The paper does not contain an explicit statement about the release of source code or a link to a code repository.
Open Datasets | Yes | We use four benchmark tabular datasets: Adult Income, Retired Adult, Parkinsons, and Credit-Card dataset from the UCI machine learning repository (Dua & Graff, 2017). ... UTK-Face dataset (Zhang et al., 2017)
Dataset Splits | Yes | We split each dataset in a 3:1 train:test ratio.
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | Batch size was 1024. We tuned the ℓ2 diameter of the projection set W and θ-gradient clipping threshold in [1, 5] in order to generate stable results with high privacy (i.e. low ϵ). Each model was trained for 200 epochs. ... Batch size was 64. ... learning rates for the descent and ascent, ηθ and ηw, remained constant during the optimization process and were chosen as 0.001 and 0.005 respectively. ... Each model was trained for 150 epochs.
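The Pseudocode row above points to Algorithm 1, DP-FERMI, a noisy stochastic descent-ascent method for private fair ERM: a descent step on the model parameters θ with per-example gradient clipping and Gaussian noise, and an ascent step on the matrix W with projection onto a bounded set. Below is a minimal sketch of one such update, assuming a FERMI-style min-max objective; the function name `dp_fermi_step`, the noise scale `sigma`, the per-example-gradient interface, and the Frobenius-ball projection are our illustrative choices, not the authors' implementation.

```python
import numpy as np

def dp_fermi_step(theta, W, grad_theta_fn, grad_w_fn, batch,
                  eta_theta=0.001, eta_w=0.005, clip=1.0,
                  sigma=1.0, w_diameter=1.0, rng=None):
    """One noisy descent-ascent update in the spirit of Algorithm 1 (DP-FERMI).

    grad_theta_fn(theta, W, x) and grad_w_fn(theta, W, x) must return
    per-example gradients for the min player (theta) and max player (W).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(batch)

    # Descent on theta: clip each per-example gradient to l2 norm `clip`,
    # average, and add Gaussian noise calibrated to the clipping threshold.
    g = np.stack([grad_theta_fn(theta, W, x) for x in batch])  # shape (n, d)
    scale = np.maximum(1.0, np.linalg.norm(g, axis=1, keepdims=True) / clip)
    g_priv = (g / scale).mean(axis=0) + rng.normal(
        0.0, sigma * clip / n, size=theta.shape)
    theta = theta - eta_theta * g_priv

    # Ascent on W: noisy averaged gradient step, then project W back onto
    # a norm ball whose diameter is a tuned hyperparameter (see the
    # Experiment Setup row).
    h = np.stack([grad_w_fn(theta, W, x) for x in batch]).mean(axis=0)
    W = W + eta_w * (h + rng.normal(0.0, sigma * clip / n, size=W.shape))
    norm = np.linalg.norm(W)
    if norm > w_diameter:
        W = W * (w_diameter / norm)
    return theta, W
```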
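The Dataset Splits row records a 3:1 train:test ratio, i.e. 25% of each dataset held out for testing. A one-line reproduction with scikit-learn; the dummy `X`, `y` arrays and the `random_state` are placeholders, since the paper does not specify a splitting library or seed.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder features/labels standing in for one of the tabular datasets.
X = np.random.randn(1000, 10)
y = np.random.randint(0, 2, size=1000)

# A 3:1 train:test ratio corresponds to holding out 25% of the data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
```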
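The Experiment Setup row mixes two regimes, which we read as the tabular experiments (batch size 1024, 200 epochs, clipping threshold tuned in [1, 5]) and the UTK-Face experiments (batch size 64, 150 epochs, constant step sizes ηθ = 0.001 and ηw = 0.005). A config sketch under that reading; the dictionary and field names are ours, not the paper's.

```python
# Hyperparameters quoted in the Experiment Setup row, grouped under the
# assumption that the first block describes the tabular runs and the
# second the UTK-Face runs.
TABULAR_SETUP = {
    "batch_size": 1024,
    "epochs": 200,
    # The l2 diameter of the projection set W and the theta-gradient
    # clipping threshold were tuned in [1, 5] for stability at low epsilon.
    "clip_threshold_search_range": (1, 5),
}
UTKFACE_SETUP = {
    "batch_size": 64,
    "epochs": 150,
    "eta_theta": 0.001,  # constant descent step size
    "eta_w": 0.005,      # constant ascent step size
}
```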