Data-Dependent Differentially Private Parameter Learning for Directed Graphical Models

Authors: Amrita Roy Chowdhury, Theodoros Rekatsinas, Somesh Jha

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We compare our algorithm with a standard data-independent approach over a diverse suite of benchmarks and demonstrate that our solution requires a privacy budget that is roughly 3× smaller to obtain the same or higher utility. ... 5. Evaluation: We evaluate the utility of the DGM learned via our algorithm by studying the following three questions: (1) Does our scheme lead to low error estimation of the DGM parameters? (2) Does our scheme result in low error inference query responses? (3) How does our scheme fare against data-independent approaches?"
Researcher Affiliation | Collaboration | "1Department of Computer Sciences, University of Wisconsin-Madison, USA; 2XaiPient, USA."
Pseudocode | Yes | "Algorithm 1: Differentially private learning of the parameters of a directed graphical model; Procedure 1: Compute Parameters"
Open Source Code | No | The paper does not provide a direct link or explicit statement about the availability of its own source code for the described methodology. It cites external data and BN repositories.
Open Datasets | Yes | "We evaluate our proposed scheme on four benchmark DGMs (BN), namely Asia, Sachs, Child and Alarm. For all four DGMs, the evaluation is carried out on corresponding synthetic data sets (D1, 2014; D2) with 10K records each. These data sets are standard benchmarks for evaluating DGM inferencing and are derived from real-world use cases (BN). ... BN repository: http://www.bnlearn.com/bnrepository/. Data repository: https://www.ccd.pitt.edu/wiki/index.php/Data_Repository."
Dataset Splits | No | The paper does not explicitly describe train/validation/test dataset splits. It mentions using "synthetic data sets... with 10K records each" and running "20 random inference queries" for evaluation.
Hardware Specification | No | The paper states: "All the experiments have been implemented in Python". It does not provide any specific hardware details such as GPU/CPU models, memory, or cloud instance types.
Software Dependencies | No | The paper only mentions "implemented in Python". It does not provide specific version numbers for Python or any other software libraries, packages, or solvers used.
Experiment Setup | Yes | "All the experiments have been implemented in Python and we set εI = 0.1 εB, β = 0.1."
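To make the setup row concrete, the sketch below shows the standard Laplace-mechanism step underlying differentially private count-based parameter estimation for a Bayesian network, with the paper's reported budget split (εI = 0.1 εB) applied to a hypothetical total budget. The counts, seed, and budget value are illustrative assumptions, not values from the paper.

```python
import math
import random

def laplace_sample(scale, rng):
    # Inverse-CDF sampling for a Laplace(0, scale) draw.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_counts(counts, epsilon, sensitivity=1.0, seed=0):
    # Standard Laplace mechanism: noise scale = sensitivity / epsilon.
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [c + laplace_sample(scale, rng) for c in counts]

eps_B = 1.0              # hypothetical total privacy budget
eps_I = 0.1 * eps_B      # budget share per the paper's setting εI = 0.1 εB
counts = [4200, 5800]    # hypothetical counts for one binary CPD cell
noisy = noisy_counts(counts, eps_B - eps_I)
# Clip to non-negative and renormalize into a conditional distribution.
probs = [max(c, 0.0) for c in noisy]
total = sum(probs)
probs = [p / total for p in probs]
```

This only illustrates the generic noisy-count step; the paper's actual algorithm allocates the budget in a data-dependent way across the DGM's conditional probability tables.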