Differentially Private Learning of Undirected Graphical Models Using Collective Graphical Models
Authors: Garrett Bernstein, Ryan McKenna, Tao Sun, Daniel Sheldon, Michael Hay, Gerome Miklau
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct a number of experiments on synthetic and real data to evaluate the quality of models learned by both Naive MLE and CGM. |
| Researcher Affiliation | Academia | ¹University of Massachusetts Amherst, ²Mount Holyoke College, ³Colgate University. Correspondence to: Garrett Bernstein <gbernstein@cs.umass.edu>. |
| Pseudocode | Yes | Algorithm 1: Non-Linear Belief Propagation (NLBP); Algorithm 2: EM for CGMs |
| Open Source Code | No | No statement regarding the release or availability of open-source code for the described methodology was found. |
| Open Datasets | No | The paper uses synthetic data and human mobility data, but provides no concrete access information (links, DOIs, repository names, or citations with author/year) for either dataset. |
| Dataset Splits | No | The paper mentions reserving 25% of individuals' data for testing, but does not fully specify the train/validation/test splits (exact percentages or sample counts for each partition) or a splitting methodology detailed enough to reproduce. |
| Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for running experiments are mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) are provided. |
| Experiment Setup | No | The paper describes data preprocessing steps and states that PSGD was tuned via grid search, but it does not report the specific hyperparameter values, training configurations, or system-level settings used for the models. |