Improved Convergence of Differential Private SGD with Gradient Clipping
Authors: Huang Fang, Xiaoyun Li, Chenglin Fan, Ping Li
ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on standard benchmark datasets are conducted to support our analysis. |
| Researcher Affiliation | Industry | Cognitive Computing Lab, Baidu Research; No. 10 Xibeiwang East Road, Beijing 100193, China; 10900 NE 8th St., Bellevue, Washington 98004, USA; {fangazq877, lixiaoyun996, fanchenglin, pingli98}@gmail.com |
| Pseudocode | Yes | Algorithm 1 Differential-private SGD with gradient clipping (DP-SGD-GC) |
| Open Source Code | No | The paper does not provide a specific link or explicit statement about releasing its own source code for the methodology described. |
| Open Datasets | Yes | We conduct experiments on two standard image classification benchmark datasets: MNIST (LeCun, 1998) and CIFAR10 (Krizhevsky & Hinton, 2009). |
| Dataset Splits | No | The paper mentions training and testing accuracy but does not specify a validation dataset split. |
| Hardware Specification | Yes | All experiments are conducted on a server with 4 CPUs and one NVIDIA Tesla P100 GPU. |
| Software Dependencies | No | The paper mentions the 'PaddlePaddle' and 'Opacus' packages but does not specify version numbers for these or other software dependencies. |
| Experiment Setup | Yes | For all experiments, we set the batch size B = 128, the noise level σ = 1.0 and the confidence level δ = 10^-5. For MNIST, we try learning rates in {2e-3, 5e-3, 1e-2} for each experiment and report the best result. For CIFAR10 we fix the learning rate to be 0.1. (An illustrative sketch using these settings follows the table.) |
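The pseudocode and experiment-setup rows above refer to Algorithm 1 (DP-SGD-GC): per-example gradient clipping followed by calibrated Gaussian noise. Below is a minimal, self-contained sketch of that update rule on a toy least-squares problem, using the batch size, noise level, and one of the learning rates reported above. The clipping threshold c, the synthetic data, and the number of steps are illustrative placeholders not taken from the paper, and this is not the authors' released code.

```python
import numpy as np

# Minimal sketch of DP-SGD with gradient clipping (DP-SGD-GC) on a toy
# least-squares problem. B and sigma follow the reported setup; the
# clipping threshold c, the toy data, and the step count are placeholders.

rng = np.random.default_rng(0)

n, d = 1024, 20                        # toy dataset size and dimension
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

B = 128      # batch size (from the reported setup)
sigma = 1.0  # Gaussian noise multiplier (from the reported setup)
c = 1.0      # clipping threshold (placeholder, not given in this summary)
lr = 1e-2    # one of the MNIST learning rates reported above
steps = 200

w = np.zeros(d)
for _ in range(steps):
    idx = rng.choice(n, size=B, replace=False)
    # Per-example gradients of the squared loss 0.5 * (x^T w - y)^2.
    residual = X[idx] @ w - y[idx]                 # shape (B,)
    grads = residual[:, None] * X[idx]             # shape (B, d)
    # Clip each per-example gradient to norm at most c.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, c / np.maximum(norms, 1e-12))
    # Average, add Gaussian noise scaled by sigma * c / B, take an SGD step.
    noisy_grad = grads.mean(axis=0) + (sigma * c / B) * rng.standard_normal(d)
    w -= lr * noisy_grad

print("final training loss:", 0.5 * np.mean((X @ w - y) ** 2))
```

In practice the same update is obtained in PyTorch via the Opacus package mentioned in the Software Dependencies row, which handles per-sample gradient computation, clipping, and noise addition inside its privacy engine.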