Private Zeroth-Order Nonsmooth Nonconvex Optimization
Authors: Qinzi Zhang, Hoang Tran, Ashok Cutkosky
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We introduce a new zeroth-order algorithm for private stochastic optimization on nonconvex and nonsmooth objectives. Our algorithm satisfies (α, αρ²/2)-Rényi differential privacy (RDP) (Mironov, 2017) (which is approximately (ρ, γ)-DP) and finds a (δ, ϵ)-stationary point with O(d δ^{-1} ϵ^{-3} + d^{3/2} ρ^{-1} δ^{-1} ϵ^{-2}) data complexity. This paper presents a novel zeroth-order algorithm for private nonsmooth nonconvex optimization. (The RDP form quoted here is unpacked in a short note after this table.) |
| Researcher Affiliation | Academia | Qinzi Zhang, Hoang Tran & Ashok Cutkosky, Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA, {qinziz,tranhp,cutkosky}@bu.edu |
| Pseudocode | Yes | Algorithm 1 Zeroth-order gradient oracle GRAD_{f,δ}(x, z_{1:b}), Algorithm 2 Zeroth-order gradient difference oracle DIFF_{f,δ}(x, y, z_{1:b}), Algorithm 3 Online-to-Nonconvex Conversion, Algorithm 4 Private variance-reduced gradient oracle O, Algorithm 5 Tree Mechanism (a hedged sketch of a generic zeroth-order gradient oracle follows the table) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it include specific repository links or explicit code release statements. |
| Open Datasets | No | The paper focuses on theoretical analysis and algorithm design and does not describe experiments using datasets. |
| Dataset Splits | No | The paper is theoretical and does not describe experiments with dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe experiments, therefore no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe experiments, therefore no specific ancillary software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper focuses on theoretical analysis and algorithm design, and does not provide specific experimental setup details such as hyperparameters or training configurations. |
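
For context on the privacy guarantee quoted in the Research Type row: by Mironov (2017), the Gaussian mechanism with ℓ₂-sensitivity Δ and noise scale σ satisfies (α, αΔ²/(2σ²))-RDP for every α > 1, so choosing σ = Δ/ρ yields exactly the (α, αρ²/2) form in the abstract. This is a standard correspondence shown for orientation only; it is not a claim about which mechanism the paper's algorithm actually uses.

```latex
% Rényi DP of the Gaussian mechanism (Mironov, 2017), shown for context only.
% With \ell_2-sensitivity \Delta and noise scale \sigma = \Delta / \rho:
\[
  \varepsilon_{\mathrm{RDP}}(\alpha)
    = \frac{\alpha \Delta^2}{2\sigma^2}
    = \frac{\alpha \rho^2}{2},
  \qquad \alpha > 1,
\]
% i.e. the mechanism is (\alpha, \alpha\rho^2/2)-RDP, matching the guarantee
% quoted from the abstract.
```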
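
The Pseudocode row names a zeroth-order gradient oracle GRAD_{f,δ}(x, z_{1:b}). As a rough illustration of what such an oracle computes, below is a minimal Python sketch of a standard two-point randomized-smoothing gradient estimator averaged over a minibatch. The function name `zo_grad_estimate`, the spherical perturbation, and the toy objective are illustrative assumptions; they are not taken from the paper's Algorithm 1, whose exact smoothing radius, perturbation distribution, and privatization step (e.g. via the Tree Mechanism listed above) are not reproduced here.

```python
import numpy as np

def zo_grad_estimate(f, x, z_batch, delta, rng):
    """Two-point zeroth-order gradient estimate of a randomized-smoothing
    surrogate of f at x, averaged over a minibatch z_1:b.

    Generic sketch only -- NOT the paper's exact Algorithm 1. `f(x, z)` is
    assumed to return a scalar loss for data point z.
    """
    d = x.shape[0]
    # One random direction on the unit sphere, shared across the minibatch.
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    # Symmetric finite difference along w, averaged over the minibatch.
    diffs = [f(x + delta * w, z) - f(x - delta * w, z) for z in z_batch]
    return (d / (2.0 * delta)) * np.mean(diffs) * w

# Hypothetical usage on a toy nonsmooth objective f(x, z) = ||x - z||_1.
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
z_batch = rng.standard_normal((8, 5))
g = zo_grad_estimate(lambda x, z: np.abs(x - z).sum(), x, z_batch, delta=0.05, rng=rng)
print(g.shape)  # (5,)
```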