SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles

Authors: Cuong Tran, Keyu Zhu, Ferdinando Fioretto, Pascal Van Hentenryck

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluations on both tabular and image datasets show that SF-PATE not only achieves better accuracy, privacy, and fairness tradeoffs with respect to the current state of the art, but is also significantly faster.
Researcher Affiliation | Academia | Cuong Tran (Syracuse University), Keyu Zhu (Georgia Institute of Technology), Ferdinando Fioretto (University of Virginia), and Pascal Van Hentenryck (Georgia Institute of Technology).
Pseudocode | No | The paper describes its algorithms (SFS-PATE, SFT-PATE) and their steps in text and mathematical equations, but it does not include a formally structured pseudocode or algorithm block.
Open Source Code | No | The paper contains no explicit statement about releasing source code for the described methodology and no link to a code repository.
Open Datasets | Yes | The evaluation uses four UCI tabular datasets (Bank, Parkinson, Income, and Credit Card) [Blake, 1998], and UTKFace [Hwang et al., 2020], a vision dataset.
Dataset Splits | No | The paper mentions repeating experiments with random seeds and evaluating performance, but it does not provide the specific train/validation/test splits (percentages, sample counts, or split methodology) required for reproduction.
Hardware Specification | No | The paper does not describe the hardware used to run its experiments, such as GPU models, CPU types, or other system specifications.
Software Dependencies | No | The paper mentions certain frameworks and models (e.g., a Lagrangian dual scheme, ResNet-50), but it does not specify software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | "To ensure a fair comparison, the experimental analysis uses the same architectures, model initialization θ, and parameters for all models (including the baseline models PF-LD and M)... A more detailed description of these approaches, the settings adopted, hyperparameter optimization, and the datasets is deferred to Appendix D."