On the Fairness Impacts of Private Ensembles Models
Authors: Cuong Tran, Ferdinando Fioretto
IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The theoretical results presented in the following sections are supported and corroborated by empirical evidence from tabular datasets (UCI Adults, Credit card, Bank, and Parkinsons) and an image dataset (UTKFace). |
| Researcher Affiliation | Academia | Cuong Tran (Syracuse University) and Ferdinando Fioretto (University of Virginia) |
| Pseudocode | No | The paper does not contain any explicitly labeled "Pseudocode" or "Algorithm" blocks. |
| Open Source Code | No | The paper refers to supplementary material with "additional experiments" and "proofs" in [Tran and Fioretto, 2023], but does not state that source code for the proposed methodology is publicly available. |
| Open Datasets | Yes | The theoretical results presented in the following sections are supported and corroborated by empirical evidence from tabular datasets (UCI Adults, Credit card, Bank, and Parkinsons) and an image dataset (UTKFace). ... extended experiments and more detailed descriptions of the datasets can be found in Appendix D of [Tran and Fioretto, 2023]. |
| Dataset Splits | No | The paper mentions using datasets and running experiments with "100 repetitions" but does not provide specific train/validation/test dataset splits (e.g., percentages, sample counts) in the main text, deferring more details to an appendix. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper describes model architectures like "feed-forward networks" and "CNNs" but does not provide a list of specific software dependencies with their version numbers (e.g., Python, PyTorch, TensorFlow versions) that would be needed for replication. |
| Experiment Setup | Yes | These results were obtained using feed-forward networks with two hidden layers and nonlinear ReLU activations for both the ensemble and student models for tabular data, and CNNs for image data. All reported metrics are the average of 100 repetitions, used to compute empirical expectations, and report 0/1 losses, which capture the concept of accuracy parity. ... A detailed description of the experimental settings can be found in Appendix D, and the proofs of all theorems are included in Appendix A of [Tran and Fioretto, 2023]. |
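
Since the paper's code is not released, the sketch below is only an illustration of the setup described in the Experiment Setup row: a feed-forward network with two hidden layers and ReLU activations, and a group-wise 0/1 loss whose per-group averages correspond to the accuracy-parity style metric the paper reports (the paper averages over 100 repetitions). It assumes PyTorch; all names, dimensions, and the toy data are hypothetical and not taken from the paper.

```python
# Minimal sketch of the quoted setup; NOT the authors' implementation.
# Assumes PyTorch. Hidden sizes, class counts, and toy data are illustrative.
import torch
import torch.nn as nn


class TwoLayerMLP(nn.Module):
    """Feed-forward network with two hidden layers and ReLU activations."""

    def __init__(self, in_dim: int, hidden_dim: int = 64, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def groupwise_zero_one_loss(model: nn.Module,
                            x: torch.Tensor,
                            y: torch.Tensor,
                            groups: torch.Tensor) -> dict:
    """Empirical 0/1 loss per protected group.

    Gaps between the per-group losses indicate violations of accuracy parity;
    averaging these values over repeated runs approximates the expectations
    reported in the paper.
    """
    with torch.no_grad():
        preds = model(x).argmax(dim=1)
    losses = {}
    for g in groups.unique():
        mask = groups == g
        losses[int(g)] = (preds[mask] != y[mask]).float().mean().item()
    return losses


if __name__ == "__main__":
    # Toy stand-in for a tabular dataset with a binary protected attribute.
    torch.manual_seed(0)
    x = torch.randn(256, 10)
    y = torch.randint(0, 2, (256,))
    groups = torch.randint(0, 2, (256,))
    model = TwoLayerMLP(in_dim=10)
    print(groupwise_zero_one_loss(model, x, y, groups))
```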