Post-processing of Differentially Private Data: A Fairness Perspective

Authors: Keyu Zhu, Ferdinando Fioretto, Pascal Van Hentenryck

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The theoretical analysis is complemented with numerical simulations on Census data.
Researcher Affiliation | Academia | Georgia Institute of Technology; Syracuse University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology; there are no repository links or explicit code-release statements.
Open Datasets | No | The paper mentions using "US Census data" and the "2010 US Census release" but does not provide concrete access information (a specific link, DOI, repository name, or formal citation with authors and year) for the dataset instances used in its simulations, nor does it refer to an established benchmark dataset with explicit access details.
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not specify the hardware (exact GPU/CPU models, processor types and speeds, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | The experiments use the Laplace mechanism with parameter λ = 10 and the Gaussian mechanism with parameter σ = 25. The empirical study of α-fairness and its bounds in Theorem 1, associated with the post-processing mechanism πS, is reported in Table 1 over 1,000,000 independent runs.
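The Experiment Setup row is the only part of the checklist with concrete numerical detail. Below is a minimal sketch of how the two reported mechanisms (Laplace with λ = 10, Gaussian with σ = 25) could be simulated over repeated runs. It assumes NumPy, a hypothetical vector of true counts (the paper's Census data is not publicly specified here), and a simplified non-negativity projection as a stand-in for the paper's post-processing mechanism πS; it is illustrative only, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true counts; a stand-in for Census-style data, not the paper's dataset.
true_counts = np.array([120.0, 45.0, 300.0, 8.0, 77.0])

LAPLACE_SCALE = 10.0   # lambda = 10, as reported in the experiment setup
GAUSSIAN_SIGMA = 25.0  # sigma = 25, as reported in the experiment setup


def laplace_mechanism(x, scale, rng):
    """Add i.i.d. Laplace noise with the given scale to each count."""
    return x + rng.laplace(loc=0.0, scale=scale, size=x.shape)


def gaussian_mechanism(x, sigma, rng):
    """Add i.i.d. Gaussian noise with standard deviation sigma to each count."""
    return x + rng.normal(loc=0.0, scale=sigma, size=x.shape)


def post_process(noisy):
    """Simplified post-processing: project onto the non-negative orthant.
    Illustrative stand-in only, not the paper's pi_S mechanism."""
    return np.clip(noisy, 0.0, None)


# Monte Carlo estimate of the per-entry signed error after post-processing.
# The paper reports 1,000,000 independent runs; a smaller count is used here
# so the sketch runs quickly.
n_runs = 10_000
errors = np.zeros_like(true_counts)
for _ in range(n_runs):
    noisy = laplace_mechanism(true_counts, LAPLACE_SCALE, rng)
    released = post_process(noisy)
    errors += released - true_counts

print("mean signed error per entry:", errors / n_runs)
```

Swapping laplace_mechanism for gaussian_mechanism (with GAUSSIAN_SIGMA) reproduces the same loop for the second mechanism reported in the setup.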