FairProof: Confidential and Certifiable Fairness for Neural Networks
Authors: Chhavi Yadav, Amrita Roy Chowdhury, Dan Boneh, Kamalika Chaudhuri
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We implement FairProof in gnark and demonstrate empirically that our system is practically feasible. ... In this section we evaluate the performance of FairProof empirically. |
| Researcher Affiliation | Academia | 1University of California, San Diego 2Stanford University. |
| Pseudocode | Yes | Algorithm 1 Individual Fairness Certification, Algorithm 2 GeoCert, Algorithm 3 ReducePolyDim, Algorithm 4 FairProof: Verifiable Individual Fairness Certification, Algorithm 5 VerifyPolytope, Algorithm 6 VerifyDistance, Algorithm 7 VerifyNeighbor, Algorithm 8 VerifyBoundary, Algorithm 9 VerifyOrder, Algorithm 10 VerifyMin, Algorithm 11 VerifyInference. |
| Open Source Code | Yes | Code is available at https://github.com/infinite-pursuits/FairProof. |
| Open Datasets | Yes | We use three standard fairness benchmarks. Adult (Becker & Kohavi, 1996) ... Default Credit (Yeh, 2016) ... German Credit (Hofmann, 1994). |
| Dataset Splits | No | The paper states 'randomly sample 100 test data points as input queries' but does not specify the overall training, validation, and test splits for the datasets, such as percentages or specific counts for each split. |
| Hardware Specification | Yes | We run all our code for FairProof without any multithreading or parallelism, on an Intel i9 CPU with 28 cores. |
| Software Dependencies | Yes | FairProof is implemented using the gnark (Botrel et al., 2023) zk-SNARK library in Go. ... Botrel, G., Piellard, T., Housni, Y. E., Kubjas, I., and Tabaie, A. Consensys/gnark: v0.9.0, February 2023. |
| Experiment Setup | Yes | We train fully-connected ReLU networks with stochastic gradient descent in PyTorch. Our networks have 2 hidden layers with different sizes including (4, 2), (2, 4) and (8, 2). All the dataset features are standardized. ... we vary regularization by changing the weight decay parameter in PyTorch. |
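For concreteness, the architectures named in the setup row (e.g. hidden sizes (4, 2)) can be sketched as a plain forward pass. This is a minimal pure-Python illustration, not the paper's implementation; the authors train these networks in PyTorch, and the input dimension and random weights below are hypothetical placeholders.

```python
import random

def relu(v):
    """Element-wise ReLU on a list of floats."""
    return [max(0.0, x) for x in v]

def linear(W, b, x):
    """Affine layer: W is (out_dim x in_dim) rows, b is the bias vector."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def forward(params, x):
    """Fully-connected network: ReLU after every layer except the last."""
    for i, (W, b) in enumerate(params):
        x = linear(W, b, x)
        if i < len(params) - 1:
            x = relu(x)
    return x

# Hypothetical instance of the (4, 2) architecture from the table:
# 3 standardized input features -> hidden 4 -> hidden 2 -> 2 output logits.
random.seed(0)
def rand_layer(n_out, n_in):
    W = [[random.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

params = [rand_layer(4, 3), rand_layer(2, 4), rand_layer(2, 2)]
logits = forward(params, [0.1, -0.5, 0.3])
print(len(logits))  # 2 output logits
```

In the paper's pipeline these same weights would be committed to and fed into the gnark circuits (VerifyInference etc.), so keeping the network this small is what makes the zk-SNARK proving cost practical.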