Private Stochastic Convex Optimization and Sparse Learning with Heavy-tailed Data Revisited
Authors: Youming Tao, Yulian Wu, Xiuzhen Cheng, Di Wang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we revisit the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) with heavy-tailed data... We propose a novel robust and private mean estimator which is optimal. Based on its idea, we then extend to the general d-dimensional space and study DP-SCO... We also provide lower bounds... We propose a new method and show it is also optimal... |
| Researcher Affiliation | Academia | Youming Tao¹, Yulian Wu², Xiuzhen Cheng¹, and Di Wang². ¹School of Computer Science, Shandong University. ²CEMSE, KAUST. di.wang@kaust.edu.sa |
| Pseudocode | Yes | Algorithm 1 (Truncation-Based DP Mean Estimator): DPODME^T_{ε,δ,ξ}(X) |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code for the described methodology, nor does it provide any links to a code repository. |
| Open Datasets | No | The paper is theoretical and does not mention specific datasets or their public availability for training purposes. It refers to "data samples X" without specifying a publicly accessible dataset source or name. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments, therefore it does not provide specific dataset split information for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe experiments, thus no specific hardware details used for running experiments are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe experiments, thus no specific ancillary software details with version numbers are provided. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs, therefore it does not provide specific experimental setup details, hyperparameters, or training configurations. |
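Since no code accompanies the paper, the following is only a minimal illustrative sketch of the general truncation-based approach to private mean estimation that the Pseudocode row names: clip each sample to a bounded range, average, and add calibrated Laplace noise. The threshold `T`, the function name, and the noise calibration are assumptions for illustration, not the paper's actual DPODME algorithm.

```python
import numpy as np

def truncated_dp_mean(x, eps, T, rng=None):
    """Illustrative truncation-based ε-DP mean estimator (not the paper's exact algorithm).

    Each sample is clipped to [-T, T], so changing one sample moves the
    empirical mean by at most 2T/n (the sensitivity). Adding Laplace noise
    with scale 2T/(n*eps) then yields eps-differential privacy.
    """
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(x, dtype=float), -T, T)  # truncate heavy tails
    n = x.size
    noise = rng.laplace(loc=0.0, scale=2.0 * T / (n * eps))
    return x.mean() + noise

# Example: privately estimate the mean of 10,000 samples centered at 1.0
est = truncated_dp_mean(np.full(10_000, 1.0), eps=1.0, T=5.0)
```

With n = 10,000, ε = 1, and T = 5 the noise scale is only 0.001, so the private estimate stays close to the true mean; the paper's contribution is choosing the truncation level optimally for heavy-tailed (bounded-moment) distributions rather than assuming boundedness.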