Learner-Private Convex Optimization
Authors: Jiaming Xu, Kuang Xu, Dana Yang
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We show that, if the learner wants to ensure the probability of the adversary estimating accurately be kept below 1/L, then the overhead in query complexity is additive in L in the minimax formulation, but multiplicative in L in the Bayesian formulation. Compared to existing learner-private sequential learning models with binary feedback, our results apply to the significantly richer family of general convex functions with full-gradient feedback. Our proofs are largely enabled by tools from the theory of Dirichlet processes, as well as more sophisticated lines of analysis aimed at measuring the amount of information leakage under a full-gradient oracle. |
| Researcher Affiliation | Academia | 1The Fuqua School of Business, Duke University, Durham NC, USA 2Stanford Graduate School of Business, Stanford University, Stanford CA, USA. Correspondence to: Jiaming Xu <jiaming.xu868@duke.edu>, Kuang Xu <kuangxu@stanford.edu>, Dana Yang <xiaoqian.yang@duke.edu>. |
| Pseudocode | Yes | Algorithm 1 Querying Strategy under the Bayesian Setting |
| Open Source Code | No | The paper does not include any statement about releasing source code, nor does it provide a link to a code repository for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on empirical datasets. It constructs a prior distribution π and uses a Dirichlet process to model functions for theoretical analysis, which is not equivalent to providing access to a publicly available or open dataset for training. |
| Dataset Splits | No | The paper is theoretical and does not discuss empirical data splits (training, validation, test) for reproduction purposes. |
| Hardware Specification | No | The paper is theoretical and does not describe any empirical experiments; accordingly, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe any empirical experiments; accordingly, no software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper is theoretical and does not describe an empirical experimental setup with hyperparameters or training settings. |
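The headline result quoted above contrasts an additive-in-L overhead (minimax) with a multiplicative-in-L overhead (Bayesian) for keeping the adversary's success probability below 1/L. The toy sketch below is **not** the paper's Algorithm 1; it is a hypothetical replicated-bisection obfuscation, in the spirit of earlier binary-feedback learner-privacy work, that makes the multiplicative-overhead pattern concrete: the learner runs a full bisection in each of L subintervals so that, to an eavesdropper watching only the query sequence, every replica looks equally likely to contain the true optimum.

```python
import math

def bisection_queries(eps: float) -> int:
    """Queries for plain bisection to localize a point within eps on [0, 1]."""
    return math.ceil(math.log2(1.0 / eps))

def replicated_bisection_queries(eps: float, L: int) -> int:
    """Toy private strategy (illustrative, not the paper's algorithm):
    partition [0, 1] into L equal subintervals and run an indistinguishable
    bisection inside each one, so the adversary's guess is correct with
    probability at most 1/L. Each replica searches an interval of length
    1/L down to accuracy eps, and the total cost is L times that."""
    per_replica = math.ceil(math.log2((1.0 / L) / eps))
    return L * per_replica

if __name__ == "__main__":
    eps, L = 1e-6, 8
    plain = bisection_queries(eps)            # non-private baseline
    private = replicated_bisection_queries(eps, L)
    print(plain, private)                     # private cost grows roughly as L * plain
```

With eps = 1e-6 and L = 8, the private count is roughly L times the 20-query baseline, matching the multiplicative blow-up the paper proves for the Bayesian formulation; the minimax formulation's additive overhead is the contrasting (and less costly) regime.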