Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach
Authors: Cuong Tran, Ferdinando Fioretto, Pascal Van Hentenryck
AAAI 2021, pp. 9932-9939
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper analyses the tension between accuracy, privacy, and fairness, and the experimental evaluation illustrates the benefits of the proposed model on several prediction tasks. |
| Researcher Affiliation | Academia | Syracuse University; Georgia Institute of Technology |
| Pseudocode | Yes | Algorithm 1: Fair-Lagrangian Dual (F-LD) |
| Open Source Code | No | The paper does not provide an explicit statement or link to its own open-source code for the described methodology. |
| Open Datasets | Yes | This section studies the behavior of the proposed algorithm on several datasets, including Income, Bank, and Compas (Zafar et al. 2017a) datasets. |
| Dataset Splits | No | The paper mentions 'Training data' and 'mini-batch B', but it does not provide specific details on validation splits (e.g., percentages, sample counts, or explicit mention of a validation set). |
| Hardware Specification | Yes | The tests use a common laptop (MacBook Air 2013, 1.7GHz, 8GB RAM) on the Bank dataset and are consistent for all the fairness notions adopted. |
| Software Dependencies | No | The paper mentions using 'JAX' for speedups, but it does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | PF-LD uses clipping bound values Cp = 10.0 and Cd = 5.0, and each experiment and configuration is repeated 10 times, with average and standard deviation results reported. The privacy losses are set to ϵ = 1.0 and δ = 10⁻⁵, unless otherwise specified. Given the input dataset D, the optimizer step size α > 0, and the vector of step sizes s, the Lagrangian multipliers are initialized in line 1. The training is performed for a fixed number of T epochs. At each epoch k, the primal update step (lines 3 and 4) optimizes the model parameters θ using stochastic gradient descent over different mini-batches B ⊆ D. |
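To make the setup quoted above concrete, the sketch below mimics the primal/dual structure of a Lagrangian-dual training loop: a primal SGD pass over mini-batches minimizes the prediction loss plus the multiplier-weighted fairness surrogate with gradients clipped at Cp, and a dual step increases the multiplier in proportion to the constraint violation clipped at Cd. This is a minimal illustration under stated assumptions, not the authors' F-LD/PF-LD implementation: the toy logistic model, the demographic-parity-style surrogate, the finite-difference fairness gradient, and all hyper-parameter values other than Cp = 10.0 and Cd = 5.0 are illustrative, and the Gaussian-noise scales needed to reach a given (ϵ, δ) target are left unset.

```python
# Hypothetical sketch of a Lagrangian-dual (primal/dual) training loop in the
# spirit of Algorithm 1 (F-LD). Model, fairness surrogate, and most
# hyper-parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: features X, binary labels y, binary protected attribute a.
n, d = 1000, 5
X = rng.normal(size=(n, d))
a = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * a + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def clip(g, bound):
    """Rescale vector g so its L2 norm is at most `bound`."""
    norm = np.linalg.norm(g)
    return g * min(1.0, bound / (norm + 1e-12))

def fairness_violation(theta, Xb, ab):
    """Demographic-parity-style surrogate: gap between group mean scores."""
    if ab.min() == ab.max():          # batch contains a single group
        return 0.0
    p = sigmoid(Xb @ theta)
    return abs(p[ab == 1].mean() - p[ab == 0].mean())

# Hyper-parameters; only C_p and C_d match values reported in the table.
T, batch_size = 50, 64
alpha, s = 0.1, 0.05                  # primal step size, dual step size
C_p, C_d = 10.0, 5.0                  # primal / dual clipping bounds
sigma_p, sigma_d = 0.0, 0.0           # set > 0 to add Gaussian noise (PF-LD style)

theta = np.zeros(d)
lam = 0.0                             # Lagrangian multiplier, initialized to zero

for epoch in range(T):
    # Primal step: SGD over mini-batches on loss + lam * fairness surrogate.
    idx = rng.permutation(n)
    for start in range(0, n, batch_size):
        b = idx[start:start + batch_size]
        Xb, yb, ab = X[b], y[b], a[b]

        # Gradient of the averaged binary cross-entropy loss.
        p = sigmoid(Xb @ theta)
        g_loss = Xb.T @ (p - yb) / len(b)

        # Finite-difference gradient of the fairness surrogate (illustrative).
        eps = 1e-5
        g_fair = np.array([
            (fairness_violation(theta + eps * e, Xb, ab)
             - fairness_violation(theta - eps * e, Xb, ab)) / (2 * eps)
            for e in np.eye(d)
        ])

        g = clip(g_loss + lam * g_fair, C_p)
        g += sigma_p * rng.normal(size=d)   # DP noise on the primal step (off here)
        theta -= alpha * g

    # Dual step: raise the multiplier in proportion to the clipped violation.
    v = min(fairness_violation(theta, X, a), C_d) + sigma_d * rng.normal()
    lam = max(0.0, lam + s * v)

print(f"final multiplier lam={lam:.3f}, "
      f"violation={fairness_violation(theta, X, a):.4f}")
```

In this reading, the separate clipping bounds Cp and Cd bound the sensitivity of the primal gradients and of the constraint-violation signal used in the dual update, which is what allows noise to be added to each step when differential privacy is required.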