Tractable MCMC for Private Learning with Pure and Gaussian Differential Privacy
Authors: Yingyu Lin, Yi-An Ma, Yu-Xiang Wang, Rachel Emily Redberg, Zhiqi Bu
ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section D EXPERIMENTS (D.1 Theoretical lower bounds; D.2 Empirical Risks on Real Datasets). Setup: "We experiment on two real datasets: Red Wine Quality and White Wine Quality from UCI repository (https://archive.ics.uci.edu/dataset/186/wine+quality)." |
| Researcher Affiliation | Collaboration | Yingyu Lin¹, Yi-An Ma¹, Yu-Xiang Wang², Rachel Redberg², Zhiqi Bu³ (¹UC San Diego, ²UC Santa Barbara, ³Amazon AI) |
| Pseudocode | Yes | Algorithm 1: Metropolis-adjusted Langevin algorithm (MALA) with constraint; Algorithm 2: Approximate SAmple Perturbation (ASAP); Algorithm 3: End-to-End Localized ASAP; Algorithm 4: Approximate Output Perturbation. (A minimal, generic MALA sketch follows the table.) |
| Open Source Code | No | The paper does not explicitly state that source code for the described methodology is publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | "We experiment on two real datasets: Red Wine Quality and White Wine Quality from UCI repository (https://archive.ics.uci.edu/dataset/186/wine+quality)." (A loading sketch follows the table.) |
| Dataset Splits | No | The paper mentions running experiments on specific datasets but does not explicitly detail the train/validation/test splits (e.g., percentages or counts) used for reproduction. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory specifications) used to run the experiments. |
| Software Dependencies | No | The paper mentions general software concepts related to machine learning (e.g., 'deep learning'), but does not specify particular software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | Yes | C PARAMETERS IN LOCALIZED-ASAP: The choice of the algorithm for providing θ0 and the associated B parameter in Algorithm 3 is delicate. ... Table 2: The choice of γ, B, λ, θ0, ε, ρ when instantiating Algorithm 3 for pure DP or Gaussian DP learning. ... Table 3: Choices of step sizes, number of iterations, and maximum number of restarts in Algorithm 1 for pure DP and Gaussian DP. |
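
For orientation, below is a minimal, hedged sketch of a Metropolis-adjusted Langevin step restricted to a Euclidean ball. It is a generic illustration of the MALA-with-constraint idea named in Algorithm 1, not the paper's exact procedure: the function names, the L2-ball constraint, and the reject-on-violation rule are assumptions, and the perturbation and restart logic of the paper's Algorithms 2–4 is not reproduced here.

```python
import numpy as np

def mala_with_constraint(log_pi, grad_log_pi, theta0, step_size, n_iters, radius, seed=0):
    """Generic MALA restricted to an L2 ball (illustrative sketch, not the paper's Algorithm 1)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)

    def log_q(x, y):
        # Log density (up to a constant) of the Langevin proposal x ~ q(. | y).
        diff = x - y - step_size * grad_log_pi(y)
        return -np.sum(diff ** 2) / (4.0 * step_size)

    for _ in range(n_iters):
        prop = (theta + step_size * grad_log_pi(theta)
                + np.sqrt(2.0 * step_size) * rng.standard_normal(theta.shape))
        if np.linalg.norm(prop) > radius:
            continue  # constraint violated: reject and keep the current state
        log_alpha = (log_pi(prop) + log_q(theta, prop)
                     - log_pi(theta) - log_q(prop, theta))
        if np.log(rng.uniform()) < log_alpha:  # Metropolis-Hastings accept/reject
            theta = prop
    return theta

# Example: sample (approximately) from a standard Gaussian restricted to a ball of radius 3.
sample = mala_with_constraint(lambda t: -0.5 * np.sum(t ** 2), lambda t: -t,
                              np.zeros(2), step_size=0.1, n_iters=2000, radius=3.0)
```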
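
Similarly, a small loading sketch for the Wine Quality data cited above. The direct CSV endpoints and the semicolon separator are assumptions based on the standard UCI release; the paper links only the dataset landing page.

```python
import pandas as pd

# Assumed direct CSV endpoints for the standard UCI release; the paper cites only the
# landing page https://archive.ics.uci.edu/dataset/186/wine+quality.
BASE = "https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality"
red = pd.read_csv(f"{BASE}/winequality-red.csv", sep=";")
white = pd.read_csv(f"{BASE}/winequality-white.csv", sep=";")

# 11 physicochemical features plus the 'quality' label in each file.
X_red, y_red = red.drop(columns="quality").to_numpy(), red["quality"].to_numpy()
print(red.shape, white.shape)  # (1599, 12) and (4898, 12) in the standard release
```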