Faster Differentially Private Convex Optimization via Second-Order Methods
Authors: Arun Ganesh, Mahdi Haghifam, Thomas Steinke, Abhradeep Guha Thakurta
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We theoretically and empirically study the performance of our algorithm. Empirical results show our algorithm consistently achieves the best excess loss compared to other baselines and is 10-40x faster than DP-GD/DP-SGD for challenging datasets. |
| Researcher Affiliation | Collaboration | Arun Ganesh (Google Research); Mahdi Haghifam (University of Toronto, Vector Institute); Thomas Steinke (Google DeepMind); Abhradeep Thakurta (Google DeepMind) |
| Pseudocode | Yes | Algorithm 1 Meta Algorithm ... Algorithm 2 DPSolver ... Algorithm 3 Newton Method with Double Noise (an illustrative sketch of the double-noise idea follows this table). |
| Open Source Code | No | The paper does not provide an explicit link to open-source code for the methodology it describes. It does reference a benchmark repository that includes AOP, which is a baseline, but not the code for their own method. |
| Open Datasets | Yes | We conducted experiments on six publicly available datasets: a1a, Adult, covertype, synthetic, fashion-MNIST, and protein (Appendix C includes fashion-MNIST and protein results). ... [DG17] D. Dua and C. Graff. UCI Machine Learning Repository, 2017. |
| Dataset Splits | No | The paper mentions training but does not specify train/validation/test splits or how the data were partitioned for evaluation, beyond general statements. |
| Hardware Specification | No | The paper states, "Our experiments are run on CPU," but does not specify a particular CPU model, memory size, or other detailed hardware specifications. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments (e.g., Python, PyTorch, TensorFlow, etc.). |
| Experiment Setup | No | The paper discusses algorithmic details and parameters such as noise scales and stepsizes, but it does not provide a comprehensive experimental setup, e.g., specific learning rates, batch sizes (only mentioned for the minibatch variant), epochs, or optimizer settings for all experiments. It states, "For brevity, many of the details behind our implementation and more experimental results are deferred to Appendix C."; however, Appendix C also does not contain detailed hyperparameters. |
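
As a reading aid for the Pseudocode row above, the snippet below illustrates the general "double noise" Newton idea: Gaussian noise is added once to the gradient and once (symmetrically) to the Hessian, and the noisy Hessian is projected back to a well-conditioned positive-definite matrix before solving for the update direction. This is an assumption-laden sketch, not the paper's Algorithm 3: the logistic-loss objective, noise scales, eigenvalue floor, and step size are placeholders and are not calibrated to any formal (epsilon, delta) guarantee.

```python
# Illustrative "double noise" Newton-style step for logistic regression.
# NOT the paper's Algorithm 3: all noise scales and constants below are
# placeholder assumptions with no privacy calibration.
import numpy as np

def dp_newton_step(theta, X, y, grad_noise_scale=0.1, hess_noise_scale=0.1,
                   eig_floor=1e-2, step_size=1.0, rng=None):
    """One noisy Newton update; every hyperparameter here is illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    probs = 1.0 / (1.0 + np.exp(-(X @ theta)))

    # Gradient of the averaged logistic loss, plus Gaussian noise (noise #1).
    grad = X.T @ (probs - y) / n
    grad += rng.normal(scale=grad_noise_scale, size=d)

    # Hessian of the loss, plus symmetric Gaussian noise (noise #2).
    weights = probs * (1.0 - probs)
    hess = (X.T * weights) @ X / n
    noise = rng.normal(scale=hess_noise_scale, size=(d, d))
    hess += (noise + noise.T) / 2.0

    # Project the noisy Hessian back to a well-conditioned PSD matrix so the
    # Newton direction stays well defined despite the added noise.
    eigvals, eigvecs = np.linalg.eigh(hess)
    eigvals = np.clip(eigvals, eig_floor, None)
    hess_psd = (eigvecs * eigvals) @ eigvecs.T

    # Damped Newton update using the privatized gradient and Hessian.
    return theta - step_size * np.linalg.solve(hess_psd, grad)
```

In a real DP implementation one would additionally bound per-example sensitivity (e.g., by clipping) and derive the two noise scales from the privacy budget; both steps are omitted from this sketch.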