Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints

Authors: Justin Whitehouse, Aaditya Ramdas, Steven Z. Wu, Ryan M. Rogers

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically evaluate the Brownian mechanism and Reduced Above Threshold in Section 6, finding that the Brownian mechanism can offer privacy loss savings over the Laplace noise reduction method introduced by Ligett et al. [2017]. (A hedged sketch of the Brownian noise-reduction step appears after this table.)
Researcher Affiliation | Collaboration | Justin Whitehouse (Carnegie Mellon University, jwhiteho@andrew.cmu.edu); Zhiwei Steven Wu (Carnegie Mellon University, zstevenwu@cmu.edu); Aaditya Ramdas (Carnegie Mellon University, aramdas@cmu.edu); Ryan Rogers (LinkedIn, rrogers@linkedin.com)
Pseudocode | Yes | Algorithm 1: Reduced Above Threshold (via Laplace Noise Reduction). (A sketch of the baseline AboveThreshold routine that this algorithm refines appears after this table.)
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology.
Open Datasets | Yes | For logistic regression, we leveraged the KDD-99 dataset [KDD, 1999] with d = 38 features... For ridge regression, we used the Twitter dataset [Kawala et al., 2013] with d = 77 features...
Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits, such as percentages or counts. It mentions using 'training data as a held-out dataset' for evaluation, but this is not a standard validation split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications, or cloud instance types) used for running the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | Yes | We discuss the specific parameter settings for these experiments in Appendix E. For the logistic regression experiments, we used a regularization parameter of 1e-4 and a step size of 0.1 for both BM and LNR. We used a learning rate schedule that decayed by a factor of 10 every 50 epochs. For the ridge regression experiments, we used a regularization parameter of 0.1.
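The Brownian mechanism named in the Research Type row releases a statistic perturbed by a Brownian motion evaluated at progressively earlier times, so the noise can be peeled back until an accuracy target is met; the point of noise reduction is that the privacy loss is governed by the least-noisy answer actually released rather than by summing over all releases. Below is a minimal sketch of that noise-reduction step for a scalar statistic, assuming standard Brownian bridge conditioning; the function name, interface, and stopping logic are illustrative and not the authors' implementation.

```python
import numpy as np

def brownian_noise_reduction(true_value, times, rng=None):
    """Release a sequence of increasingly accurate noisy answers.

    `times` is a decreasing sequence t_1 > t_2 > ... > t_k of Brownian
    time parameters; the answer at time t carries Gaussian noise of
    variance t. Each later answer is obtained by conditioning on the
    already-released Brownian value (a Brownian bridge step), so the
    noise shrinks without drawing fresh independent noise each round.
    """
    rng = np.random.default_rng() if rng is None else rng
    releases = []
    t_prev, b_prev = None, None
    for t in times:
        if t_prev is None:
            # First release: B(t_1) ~ N(0, t_1).
            b = rng.normal(0.0, np.sqrt(t))
        else:
            # Brownian bridge: for t < t_prev, B(t) | B(t_prev) = b_prev
            # is N((t / t_prev) * b_prev, t * (t_prev - t) / t_prev).
            mean = (t / t_prev) * b_prev
            var = t * (t_prev - t) / t_prev
            b = rng.normal(mean, np.sqrt(var))
        releases.append(true_value + b)
        t_prev, b_prev = t, b
    return releases
```

For example, `brownian_noise_reduction(42.0, [1.0, 0.5, 0.1])` produces three answers with noise variance 1.0, 0.5, and 0.1; an analyst would stop at the first answer that meets their accuracy constraint.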
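Algorithm 1 in the Pseudocode row builds on the classic AboveThreshold (sparse vector) routine. As a reference point only, here is a minimal sketch of the standard version with fresh Laplace noise, assuming sensitivity-1 queries; it is not the paper's Reduced Above Threshold, which replaces these independent draws with noise-reduction samples so the noise scale can be lowered adaptively.

```python
import numpy as np

def above_threshold(queries, data, threshold, epsilon, rng=None):
    """Classic AboveThreshold with sensitivity-1 queries.

    Adds Laplace noise to the threshold once and to each query answer,
    then reports the index of the first query whose noisy answer clears
    the noisy threshold; halting after that first report is what makes
    the routine epsilon-differentially private.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)
    for i, q in enumerate(queries):
        if q(data) + rng.laplace(scale=4.0 / epsilon) >= noisy_threshold:
            return i  # first query flagged as above threshold
    return None
```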
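For convenience, the hyperparameters quoted in the Experiment Setup row can be collected in a single configuration object. This is a sketch only; the dictionary layout and key names are assumptions rather than the authors' code.

```python
# Hyperparameters reported in the paper's text / Appendix E; the field
# names below are illustrative, not taken from the authors' code.
EXPERIMENT_CONFIG = {
    "logistic_regression": {
        "dataset": "KDD-99",           # d = 38 features
        "regularization": 1e-4,
        "step_size": 0.1,              # used for both BM and LNR
        "lr_decay_factor": 10,
        "lr_decay_every_epochs": 50,
    },
    "ridge_regression": {
        "dataset": "Twitter",          # d = 77 features
        "regularization": 0.1,
    },
}
```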