Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Stochastic Differentially Private and Fair Learning
Authors: Andrew Lowy, Devansh Gupta, Meisam Razaviyayn
ICLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over the state-of-the-art baselines, and can be applied to larger scale problems with non-binary target/sensitive attributes. |
| Researcher Affiliation | Academia | Andrew Lowy (University of Southern California), Devansh Gupta (Indraprastha Institute of Information Technology, Delhi), Meisam Razaviyayn (University of Southern California) |
| Pseudocode | Yes | Algorithm 1 DP-FERMI Algorithm for Private Fair ERM |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code or a link to a code repository. |
| Open Datasets | Yes | We use four benchmark tabular datasets: Adult Income, Retired Adult, Parkinsons, and Credit-Card dataset from the UCI machine learning repository (Dua & Graff (2017)). ... UTK-Face dataset (Zhang et al., 2017) |
| Dataset Splits | Yes | We split each dataset in a 3:1 train:test ratio. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | Batch size was 1024. We tuned the ℓ2 diameter of the projection set W and θ-gradient clipping threshold in [1, 5] in order to generate stable results with high privacy (i.e. low ϵ). Each model was trained for 200 epochs. ... Batch size was 64. ... learning rates for the descent and ascent, ηθ and ηw, remained constant during the optimization process and were chosen as 0.001 and 0.005 respectively. ... Each model was trained for 150 epochs. |
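
The reported 3:1 train:test split corresponds to holding out 25% of each dataset for testing. A minimal sketch of such a split, assuming scikit-learn and synthetic placeholder data (the paper's actual inputs are the UCI tabular datasets and UTK-Face):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder features/labels standing in for a UCI tabular dataset
# (e.g., Adult Income); the shapes here are illustrative only.
X = np.random.randn(1000, 20)
y = np.random.randint(0, 2, size=1000)

# A 3:1 train:test ratio is equivalent to test_size=0.25.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
assert len(X_train) == 3 * len(X_test)
```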
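The Experiment Setup quote reports two training configurations and a θ-gradient clipping threshold tuned in [1, 5]. The sketch below collects those reported values into a hypothetical config object and shows standard ℓ2-norm gradient clipping; the `TrainConfig` name, the pairing of the learning rates with both setups, and the `clip_gradient` helper are illustrative assumptions, not the authors' DP-FERMI implementation:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class TrainConfig:  # hypothetical container for the reported hyperparameters
    batch_size: int
    epochs: int
    lr_theta: float = 0.001      # descent step size (eta_theta, as reported)
    lr_w: float = 0.005          # ascent step size (eta_w, as reported)
    clip_threshold: float = 1.0  # theta-gradient clipping, tuned in [1, 5]


# The two setups quoted above; their mapping to specific datasets is not
# stated in the quote, so the grouping here is an assumption.
setup_a = TrainConfig(batch_size=1024, epochs=200)
setup_b = TrainConfig(batch_size=64, epochs=150)


def clip_gradient(g: np.ndarray, threshold: float) -> np.ndarray:
    """Rescale g so its l2 norm is at most `threshold` (standard DP-style clipping)."""
    norm = np.linalg.norm(g)
    return g if norm <= threshold else g * (threshold / norm)
```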