Beyond the Calibration Point: Mechanism Comparison in Differential Privacy

Authors: Georgios Kaissis, Stefan Kolek, Borja Balle, Jamie Hayes, Daniel Rueckert

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5. Experiments
Researcher Affiliation | Collaboration | ¹AI in Healthcare and Medicine and Institute of Radiology, Technical University of Munich, Germany; ²Mathematical Foundations of AI, LMU Munich; ³Google DeepMind.
Pseudocode | Yes | B.3. ∆-Divergence Implementation: The following code listing implements the ∆-divergence computation corresponding to the mechanisms in Figure 3 in Python. (An illustrative sketch of this kind of computation follows the table.)
Open Source Code | No | The paper includes a code listing in Appendix B.3 but does not explicitly state that this code is open-source or publicly released via a repository.
Open Datasets | Yes | Concretely, the authors calibrate seven CIFAR-10 training runs with different noise multipliers and numbers of steps while fixing the sampling rate to obtain models which all satisfy (8, 10⁻⁵)-DP. (A hedged calibration sketch follows the table.)
Dataset Splits | No | The paper references parameters and validation accuracy from De et al. (2022) (e.g., 'validation accuracy of 72.6%') but does not explicitly describe the training/validation/test dataset splits used for its own experiments in the main text.
Hardware Specification | No | No specific hardware (GPU, CPU models, etc.) used for running experiments is mentioned in the paper.
Software Dependencies | No | The paper lists Python libraries like `scipy.stats` and `numpy` in its pseudocode but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | We compare two SGMs M, M̃ with σ = 2, σ̃ = 3, p = p̃ = 9 × 10⁻⁴, N = 1.4 × 10⁶ and Ñ = 3.4 × 10⁶. (A hedged accounting sketch for these two parameterisations follows the table.)
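
The Pseudocode row above refers to the paper's Appendix B.3 listing, which is not reproduced here. As a rough illustration of the kind of "comparison beyond a single calibration point" computation involved, the sketch below compares two hypothetical Gaussian mechanisms using the standard closed-form Gaussian privacy profile (Balle & Wang, 2018) and the Gaussian trade-off (ROC) curve; the noise scales and epsilon values are placeholders and this is not the paper's ∆-divergence implementation.

import numpy as np
from scipy.stats import norm

def gaussian_privacy_profile(eps, sigma, sensitivity=1.0):
    """delta(eps) of the Gaussian mechanism with noise scale sigma (Balle & Wang, 2018)."""
    mu = sensitivity / sigma
    return norm.cdf(mu / 2 - eps / mu) - np.exp(eps) * norm.cdf(-mu / 2 - eps / mu)

def gaussian_roc(fpr, sigma, sensitivity=1.0):
    """Attack TPR at a given FPR for the Gaussian mechanism (Gaussian trade-off curve)."""
    mu = sensitivity / sigma
    return norm.cdf(norm.ppf(fpr) + mu)

sigma_a, sigma_b = 2.0, 3.0            # placeholder noise scales, not the paper's mechanisms
for eps in (0.5, 1.0, 2.0):            # a few example epsilon values
    print(f"eps={eps}: delta_a={gaussian_privacy_profile(eps, sigma_a):.4g}, "
          f"delta_b={gaussian_privacy_profile(eps, sigma_b):.4g}")

# Maximum vertical gap between the two mechanisms' ROC curves over a grid of FPRs.
fpr = np.linspace(1e-6, 1.0 - 1e-6, 10_000)
gap = gaussian_roc(fpr, sigma_a) - gaussian_roc(fpr, sigma_b)
print("max ROC gap:", gap.max(), "at FPR =", fpr[np.argmax(gap)])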
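
The calibration described in the Open Datasets row can in principle be reproduced with any DP-SGD accountant. Below is a hedged sketch using Opacus's get_noise_multiplier helper; Opacus itself, the sampling rate, and the step counts are illustrative assumptions, not values reported by the paper.

# Hedged sketch: calibrate the noise multiplier so that a fixed number of DP-SGD
# steps at a fixed sampling rate satisfies (8, 1e-5)-DP. The library choice,
# sample rate, and step counts are placeholders, not the paper's settings.
from opacus.accountants.utils import get_noise_multiplier

TARGET_EPSILON, TARGET_DELTA = 8.0, 1e-5
SAMPLE_RATE = 4096 / 50_000            # placeholder: batch size 4096 on CIFAR-10

for steps in (500, 1_000, 2_000, 4_000):   # placeholder step counts
    sigma = get_noise_multiplier(
        target_epsilon=TARGET_EPSILON,
        target_delta=TARGET_DELTA,
        sample_rate=SAMPLE_RATE,
        steps=steps,
        accountant="rdp",
    )
    print(f"{steps} steps -> noise multiplier {sigma:.3f} for ({TARGET_EPSILON}, {TARGET_DELTA})-DP")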
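
For the Experiment Setup row, the privacy guarantees of the two subsampled Gaussian mechanisms (SGMs) can be checked with a standard RDP accountant. The sketch below uses Google's dp_accounting library, which is an assumption (the paper does not name an accounting library); N and Ñ are read here as composition counts, and δ = 10⁻⁵ is a placeholder target.

# Hedged sketch: epsilon(delta) of the two SGM parameterisations quoted above.
# dp_accounting is an assumption, N and N_tilde are read as composition counts,
# and delta = 1e-5 is a placeholder.
import dp_accounting
import dp_accounting.rdp

def sgm_epsilon(noise_multiplier: float, sampling_rate: float, steps: int,
                delta: float = 1e-5) -> float:
    """epsilon(delta) of a Poisson-subsampled Gaussian mechanism after `steps` compositions."""
    accountant = dp_accounting.rdp.RdpAccountant()
    event = dp_accounting.PoissonSampledDpEvent(
        sampling_rate, dp_accounting.GaussianDpEvent(noise_multiplier)
    )
    accountant.compose(event, steps)
    return accountant.get_epsilon(delta)

p = 9e-4
print("M      :", sgm_epsilon(noise_multiplier=2.0, sampling_rate=p, steps=1_400_000))
print("M_tilde:", sgm_epsilon(noise_multiplier=3.0, sampling_rate=p, steps=3_400_000))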