Revisiting Differentially Private ReLU Regression

Authors: Meng Ding, Mingxi Lei, Liyang Zhu, Shaowei Wang, Di Wang, Jinhui Xu

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experiments on synthetic and real-world datasets also validate our results." |
| Researcher Affiliation | Academia | Meng Ding (University at Buffalo), Mingxi Lei (University at Buffalo), Liyang Zhu (KAUST), Shaowei Wang (Guangzhou University), Di Wang (KAUST), Jinhui Xu (University at Buffalo) |
| Pseudocode | Yes | Algorithm 1: DP-GLMtron; Algorithm 2: DP-Threshold; Algorithm 3: DP-TAGLMtron; Algorithm 4: DP-Tree-Aggregation |
| Open Source Code | No | "All data and code will be released after acceptance." |
| Open Datasets | Yes | "Additionally, due to space constraints, we present a real-data experiment on the MNIST dataset in Appendix B to demonstrate the performance of our proposed method." |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits. It mentions varying sample sizes (N) for experiments but does not detail how the data was partitioned for validation purposes. |
| Hardware Specification | No | The paper states "The dimensionality of the data was fixed at 1,024," which refers to data characteristics, not hardware. The NeurIPS checklist points to the appendix for compute resources, but the appendix does not specify any particular hardware (GPU/CPU models, memory, etc.) used for the experiments. |
| Software Dependencies | No | The paper mentions algorithms such as DP-SGD, DP-FTRL, DP-GLMtron, and DP-TAGLMtron, and uses concepts such as ReLU, the Gaussian mechanism, and binary-tree aggregation, but it does not specify any software dependencies (e.g., libraries, frameworks, or operating systems) with version numbers. |
| Experiment Setup | Yes | "The experiments were designed with varying privacy budgets (ε) set at 0.05, 0.2, and 0.5 with δ = 1/n^1.1 ... The learning rate was initially set to 10^-2, with N representing the sample size, which varied from 50 to 550 in steps of 100. ... The dimensionality of the data was fixed at 1,024 ... The algorithms underwent a single iteration over the generated data. ... For MNIST, the learning rate was initially set to 0.05, with N representing the sample size, which varied from 0 to 1,000 in steps of 100." |
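
For context on the Pseudocode row: GLMtron is a known iterative scheme for fitting generalized linear models such as ReLU regression, and a differentially private variant is typically obtained by clipping per-sample update terms and adding Gaussian noise. The sketch below illustrates that generic pattern only; the function name `dp_glmtron_sketch`, the clipping threshold, and the noise multiplier are illustrative assumptions, not the paper's Algorithm 1 or its (ε, δ) calibration.

```python
import numpy as np

def dp_glmtron_sketch(X, y, lr=1e-2, clip=1.0, noise_mult=1.0, passes=1, seed=0):
    """Noisy GLMtron-style updates for ReLU regression.

    Illustrative assumptions: per-sample clipping at `clip` and Gaussian
    noise with multiplier `noise_mult`; in a real DP-GLMtron these would
    be calibrated to the target (eps, delta) budget.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(passes):                       # the paper reports a single pass
        # GLMtron update with the ReLU link: residual_i = ReLU(<w, x_i>) - y_i
        residuals = np.maximum(X @ w, 0.0) - y    # shape (n,)
        g = residuals[:, None] * X                # per-sample terms, shape (n, d)
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        g = g / np.maximum(1.0, norms / clip)     # clip to bound sensitivity
        noise = rng.normal(0.0, noise_mult * clip, size=d)  # Gaussian mechanism
        w = w - lr * (g.sum(axis=0) + noise) / n
    return w
```

Clipping bounds the sensitivity of the summed update, which is what lets the Gaussian noise scale be set independently of the data.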
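The Pseudocode and Software Dependencies rows also mention a binary tree, which in the DP literature usually refers to the tree-aggregation mechanism for releasing running sums privately. Below is a minimal sketch of that textbook construction, not the paper's exact Algorithm 4; the noise scale `sigma` is left as a free parameter that would in practice be calibrated to (ε, δ) across the O(log T) tree levels.

```python
import numpy as np

def tree_prefix_sums(values, sigma, seed=0):
    """Noisy prefix sums via the binary-tree (tree-aggregation) mechanism.

    Textbook sketch, not the paper's exact Algorithm 4: every tree node
    stores its subtree sum plus Gaussian noise N(0, sigma^2), and each
    prefix sum is assembled from at most O(log T) noisy nodes, so error
    grows polylogarithmically in T instead of linearly.
    """
    rng = np.random.default_rng(seed)
    T = len(values)
    size = 1
    while size <= T:          # strictly more leaves than inputs, so the
        size *= 2             # boundary walk below never leaves the tree
    tree = np.zeros(2 * size)
    tree[size:size + T] = values          # leaves hold the raw values
    for node in range(size - 1, 0, -1):   # internal sums, bottom-up
        tree[node] = tree[2 * node] + tree[2 * node + 1]
    noisy = tree + rng.normal(0.0, sigma, size=2 * size)

    prefixes = []
    for t in range(1, T + 1):
        total, node = 0.0, size + t       # first leaf *excluded* from [0, t)
        while node > 1:
            if node % 2 == 1:             # right child: the left sibling's
                total += noisy[node - 1]  # dyadic block lies inside [0, t)
            node //= 2
        prefixes.append(total)
    return prefixes
```

For example, `tree_prefix_sums(np.ones(10), sigma=1.0)` returns ten noisy running counts, each assembled from at most log2(16) noisy blocks rather than up to ten noisy terms.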
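Finally, the Experiment Setup row pins down a concrete parameter grid for the synthetic sweep. The loop below simply wires those quoted values together; the data generator and its normalization are placeholder assumptions, since the report does not quote how the synthetic data were drawn.

```python
import numpy as np

# Parameter grid quoted in the Experiment Setup row.
d = 1024                                    # fixed data dimensionality
epsilons = [0.05, 0.2, 0.5]                 # privacy budgets
sample_sizes = list(range(50, 551, 100))    # N = 50, 150, ..., 550
lr = 1e-2                                   # initial learning rate
passes = 1                                  # single iteration over the data

for eps in epsilons:
    for n in sample_sizes:
        delta = 1.0 / n ** 1.1              # delta = 1 / n^{1.1}
        # Placeholder synthetic data; the paper's exact generator is
        # not quoted in this report.
        rng = np.random.default_rng(n)
        X = rng.normal(size=(n, d)) / np.sqrt(d)
        w_star = rng.normal(size=d)
        y = np.maximum(X @ w_star, 0.0)     # noiseless ReLU labels (assumed)
        print(f"eps={eps}, n={n}, delta={delta:.3e}")
```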