Learning Certified Individually Fair Representations
Authors: Anian Ruoss, Mislav Balunović, Marc Fischer, Martin Vechev
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental evaluation on five real-world datasets and several fairness constraints demonstrates the expressivity and scalability of our approach. |
| Researcher Affiliation | Academia | Anian Ruoss, Mislav Balunović, Marc Fischer, Martin Vechev, Department of Computer Science, ETH Zurich |
| Pseudocode | No | The paper describes algorithms and procedures in paragraph text, but does not include formal pseudocode blocks or algorithms labeled as such. |
| Open Source Code | Yes | An end-to-end implementation of our method in an open-source tool called LCIFR, together with an extensive evaluation on several datasets, constraints, and architectures. We make LCIFR publicly available at https://github.com/eth-sri/lcifr. |
| Open Datasets | Yes | We consider a variety of different datasets: Adult [55], Compas [56], Crime [55], German [55], Health (https://www.kaggle.com/c/hhp), and Law School [57]. |
| Dataset Splits | No | The paper states that data is 'split into train, test and validation sets' and that a 'grid search over model architectures and loss balancing factors γ' is evaluated 'on the validation set'. However, it does not specify the percentages, sample counts, or exact split methodology needed for reproduction (an illustrative split is sketched below the table). |
| Hardware Specification | Yes | We perform all experiments on a desktop PC using a single GeForce RTX 2080 Ti GPU and a 16-core Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz. |
| Software Dependencies | No | The paper mentions software such as PyTorch and the Adam optimizer, but does not provide version numbers for any software dependencies. |
| Experiment Setup | Yes | We model the encoder fθ as a neural network, and we use logistic regression as a classifier hψ. We perform a grid search over model architectures and loss balancing factors γ, which we evaluate on the validation set. As a result, we consider fθ with 1 hidden layer of 20 neurons (except for Law School, where we do not have a hidden layer) and a latent space of dimension 20. We fix γ to 10 for Adult, Crime, and German, to 1 for Compas and Health, and to 0.1 for Law School. We provide a more detailed overview of the model architectures and hyperparameters in Appendix G. (An illustrative sketch of this setup follows the table.) |
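
Since the paper does not report split ratios or sample counts, the following is a minimal sketch of a plausible preprocessing step, assuming a hypothetical 60/20/20 train/validation/test split via scikit-learn's `train_test_split`; none of these ratios come from the paper.

```python
from sklearn.model_selection import train_test_split

# Hypothetical split: the paper only states that each dataset is divided into
# train, validation, and test sets. The 60/20/20 ratios and the fixed seed
# below are illustrative assumptions, not the authors' reported setup.
def split_dataset(X, y, seed=0):
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.4, random_state=seed)             # 60% train
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed)   # 20% val, 20% test
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```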
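
The reported architecture and loss balancing translate into roughly the PyTorch sketch below. Only the layer sizes, latent dimension, logistic-regression head, Adam optimizer, and γ values come from the table above; `INPUT_DIM`, the `fairness_loss` callable (standing in for the DL2 constraint loss that LCIFR actually uses), and the training-step structure are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Sketch of the architecture described in the table: encoder f_theta with one
# hidden layer of 20 neurons and a 20-dimensional latent space, plus a
# logistic-regression classifier h_psi. INPUT_DIM and fairness_loss are
# hypothetical placeholders; in LCIFR the fairness term is derived from
# DL2 logical constraints, which is not reproduced here.

INPUT_DIM = 13    # hypothetical: depends on the dataset's feature encoding
LATENT_DIM = 20
GAMMA = 10.0      # loss balancing factor; the paper uses 10 for Adult

encoder = nn.Sequential(                 # f_theta
    nn.Linear(INPUT_DIM, 20),
    nn.ReLU(),
    nn.Linear(20, LATENT_DIM),
)
classifier = nn.Linear(LATENT_DIM, 1)    # h_psi: logistic regression on z

bce = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()))

def training_step(x, y, fairness_loss):
    """One joint update: classification loss plus gamma-weighted fairness loss.

    x: float tensor of shape (batch, INPUT_DIM); y: float labels in {0, 1}.
    fairness_loss: callable mapping latent codes z to a scalar penalty
    (a placeholder for the DL2 constraint loss used by LCIFR).
    """
    z = encoder(x)
    logits = classifier(z).squeeze(-1)
    loss = bce(logits, y) + GAMMA * fairness_loss(z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

For the grid search the paper describes, one would train this model once per candidate γ (the reported values span 0.1, 1, and 10) and keep the setting that performs best on the validation set.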