Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Near-Exact Privacy Amplification for Matrix Mechanisms
Authors: Christopher Choquette-Choo, Arun Ganesh, Saminul Haque, Thomas Steinke, Abhradeep Guha Thakurta
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show this lets us achieve smaller RMSE on prefix sums than the previous state-of-the-art (SOTA). We also show that we can improve on the SOTA performance on deep learning tasks. |
| Researcher Affiliation | Collaboration | Christopher A. Choquette-Choo, Thomas Steinke, & Abhradeep Thakurta (Google DeepMind, Mountain View, CA 94043, USA); Arun Ganesh (Google Research, Seattle, WA 98103, USA); Saminul Haque (Department of Computer Science, Stanford University, Stanford, CA 94305, USA) |
| Pseudocode | Yes | Algorithm 1: Estimate-Verify-Release of (Wang et al., 2023); Algorithm 2: Finding σ |
| Open Source Code | No | The paper references third-party tools like "Google's differential privacy libraries" and "Opacus", but does not provide specific access to source code for the methodology described in this paper. |
| Open Datasets | Yes | Empirical evaluation on CIFAR-10: We next use correlation matrices and σ computed using our privacy analysis to train a VGG model on CIFAR-10. |
| Dataset Splits | Yes | We replicate the CIFAR10 image recognition setting considered by (Choquette-Choo et al., 2024a): we also use the same VGG model, 20 epochs of 100 iterations with batch size 500, and momentum of 0.95 and a learning rate cooldown from η to η/20 across iterations 500 to 2000. |
| Hardware Specification | Yes | We use a V100 GPU to perform the gradient steps. |
| Software Dependencies | No | The paper mentions software like Python, PyTorch, TensorFlow, JAX, and XLA, but does not specify any version numbers for these components or libraries. |
| Experiment Setup | Yes | We replicate the CIFAR10 image recognition setting considered by (Choquette-Choo et al., 2024a): we also use the same VGG model, 20 epochs of 100 iterations with batch size 500, and momentum of 0.95 and a learning rate cooldown from η to η/20 across iterations 500 to 2000. We vary ε ∈ {0.5, 1.0, 2.0, 4.0, 8.0}. We fix the clip norm to 1.0 and tune the learning rate separately for each combination of ε and correlation matrix/amplification method, and then report the average of 100 training runs for each combination. |
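The learning-rate cooldown quoted in the Experiment Setup row (from η to η/20 across iterations 500 to 2000) can be sketched as a simple schedule function. This is a minimal illustration only: the quote does not specify the interpolation shape, so a linear ramp is assumed here, and the function name and signature are hypothetical rather than taken from the paper's code.

```python
def lr_schedule(step, eta, start=500, end=2000, final_frac=1 / 20):
    """Learning rate at a given iteration (assumed linear cooldown).

    Holds the rate at `eta` before `start`, anneals linearly to
    `eta * final_frac` between `start` and `end`, then holds the
    final value. The linear shape is an assumption; the paper's
    quote only gives the endpoints.
    """
    if step <= start:
        return eta
    if step >= end:
        return eta * final_frac
    frac = (step - start) / (end - start)  # progress through the cooldown window
    return eta * (1 - frac) + eta * final_frac * frac
```

For example, with η = 0.1 this yields 0.1 at iteration 500 and 0.005 (= η/20) from iteration 2000 onward.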