Collect at Once, Use Effectively: Making Non-interactive Locally Private Learning Possible
Authors: Kai Zheng, Wenlong Mou, Liwei Wang
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we propose several efficient algorithms for learning and estimation problems under the non-interactive LDP model, with good theoretical guarantees. |
| Researcher Affiliation | Academia | 1Key Laboratory of Machine Perception, MOE, School of EECS, Peking University, Beijing, China. Correspondence to: Kai Zheng <zhengk92@pku.edu.cn>, Wenlong Mou <mouwenlong@pku.edu.cn>, Liwei Wang <wanglw@cis.pku.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 Basic Private Vector mechanism ... Algorithm 2 LDP ℓ1 Constrained Mean Estimation ... Algorithm 3 LDP ℓ1 Constrained Linear Regression ... Algorithm 4 LDP kernel mechanism ... Algorithm 5 LDP SGLD Mechanism Collection ... Algorithm 6 LDP SGLD Mechanism Learning |
| Open Source Code | No | The paper does not include any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on a specific publicly available dataset. It defines problem settings with data properties (e.g., 'distribution D supported on B(0, 1)') but does not provide access information for empirical data. |
| Dataset Splits | No | The paper is theoretical and does not present empirical experiments; therefore it does not provide training/test/validation dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or specific hardware used for running experiments. |
| Software Dependencies | No | The paper is theoretical and does not describe any specific software dependencies with version numbers for replication. |
| Experiment Setup | No | The paper is theoretical and does not include details about an experimental setup such as hyperparameters or training configurations. |
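Since the paper ships pseudocode but no runnable code, the following is a minimal illustrative sketch of a generic non-interactive ε-LDP mean estimator using the Laplace mechanism. This is an assumption: it is not the paper's Algorithm 2 (which handles ℓ1-constrained, high-dimensional estimation), only the simplest scalar analogue, and all names here are hypothetical.

```python
import numpy as np

def ldp_release(x, eps, rng):
    """One user's local randomizer (not the paper's exact mechanism).

    For x in [-1, 1] the sensitivity is 2, so adding Laplace noise of
    scale 2/eps satisfies eps-local differential privacy.
    """
    return x + rng.laplace(scale=2.0 / eps)

def ldp_mean(data, eps, seed=0):
    """Non-interactive protocol: each user reports once, the server averages.

    The Laplace noise has mean zero, so the average of the reports is an
    unbiased estimate of the true mean.
    """
    rng = np.random.default_rng(seed)
    reports = [ldp_release(x, eps, rng) for x in data]
    return float(np.mean(reports))

# Synthetic data clipped to [-1, 1]; true mean is roughly 0.3.
data = np.clip(np.random.default_rng(1).normal(0.3, 0.2, 10_000), -1.0, 1.0)
est = ldp_mean(data, eps=1.0)
```

With n = 10,000 users and ε = 1, the per-report noise has scale 2, so the estimator's standard deviation is about 2√2 / √n ≈ 0.028; the estimate lands close to the true mean despite each individual report being heavily randomized.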