Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms

Authors: Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "5. Numerical Experiments: We use the same dataset as (Zhang & Zhu, 2017), i.e., the Adult dataset from the UCI Machine Learning Repository (Lichman, 2013). It consists of personal information of around 48,842 individuals... Figures 2(a)-2(b) show both Lmean(t) and Lrange(t) as vertical bars centered at Lmean(t). Their corresponding privacy upper bound is given in Figures 2(c)-2(d)."
Researcher Affiliation | Academia | "Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan, USA."
Pseudocode | Yes | "Algorithm 1: Penalty perturbation (PP) method. Parameter: determine θ such that 2c₁ < (B_i/N + 2θV_i) holds for all i. Initialize: generate f_i(0) randomly and set λ_i(0) = 0_{d×1} for every node i ∈ N; t = 0. Input: {D_i}_{i=1}^N, {α_i(1), …, α_i(T)}_{i=1}^N. for t = 0 to T−1 do ..."
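The quoted pseudocode only fixes the initialization, inputs, and loop bounds. A minimal runnable sketch of the penalty-perturbation idea is given below; the local objective (distributed logistic regression), the gradient-based primal update, the Gaussian noise mechanism, and all parameter values are illustrative assumptions, not the paper's exact Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_loss_grad(f, X, y):
    """Gradient of (1/B) * sum log(1 + exp(-y * <x, f>)) with respect to f."""
    z = y * (X @ f)
    return -(X * (y / (1.0 + np.exp(z)))[:, None]).mean(axis=0)

def pp_admm(Xs, ys, T=50, eta1=1.0, q1=1.05, alpha1=0.1, q2=0.9, theta=0.5, lr=0.5):
    """Toy PP-ADMM sketch: N nodes in a fully connected graph.

    The penalty eta(t) grows geometrically while the noise variance
    alpha(t) decays geometrically -- assumed schedules for illustration.
    """
    N, d = len(Xs), Xs[0].shape[1]
    f = [np.zeros(d) for _ in range(N)]
    lam = [np.zeros(d) for _ in range(N)]      # dual variables, init 0 as in Algorithm 1
    for t in range(T):
        eta = eta1 * q1 ** t                   # assumed increasing penalty schedule
        alpha = alpha1 * q2 ** t               # assumed decaying noise variance
        f_bar = np.mean(f, axis=0)
        new_f = []
        for i in range(N):
            noise = rng.normal(0.0, np.sqrt(alpha), size=d)  # privacy noise in primal update
            grad = (logistic_loss_grad(f[i], Xs[i], ys[i])
                    + lam[i] + eta * (1 + theta) * (f[i] - f_bar) + noise)
            new_f.append(f[i] - lr / (1.0 + eta) * grad)     # damped step; assumption
        f = new_f
        f_bar = np.mean(f, axis=0)
        for i in range(N):
            lam[i] = lam[i] + eta * (f[i] - f_bar)           # dual ascent on consensus gap
    return np.mean(f, axis=0)
```

The key qualitative feature kept from the paper is the perturbed penalty term scaled by (1 + θ) together with per-iteration noise α_i(t) that decays over time, so later (more accurate) iterates are perturbed less.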
Open Source Code | No | The paper does not provide any links to source code repositories, nor does it state that the code is publicly available.
Open Datasets | Yes | "We use the same dataset as (Zhang & Zhu, 2017), i.e., the Adult dataset from the UCI Machine Learning Repository (Lichman, 2013)."
Dataset Splits | No | The paper implicitly refers to a training set and a test set through its evaluation metrics, but it does not specify the splits (e.g., percentages or sample counts) or mention a validation set, which would be needed for reproduction.
Hardware Specification | No | The paper does not describe the hardware used for its experiments (e.g., GPU models, CPU types, or cloud infrastructure).
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., libraries or solvers) needed to replicate the experiments.
Experiment Setup | Yes | "We will use as loss function the logistic loss L(z) = log(1 + exp(−z)), with |L′| ≤ 1 and L″ ≤ c₁ = 1/4. The regularizer is R(f_i) = (1/2)‖f_i‖₂². ... For simplicity of presentation we shall fix θ = 0.5, let η_i(t) = η(t) = θq₁^{t−1}, and noise α_i(t) = α(t) = α(1)q₂^{t−1} for all nodes."
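The stated bounds |L′| ≤ 1 and L″ ≤ c₁ = 1/4 for the logistic loss, and the geometric schedules for η(t) and α(t), can be checked numerically. A small sketch follows; the specific values of q₁, q₂, and α(1) are placeholders (the paper fixes only θ = 0.5 in the quoted passage).

```python
import numpy as np

# L(z) = log(1 + exp(-z)); the paper states |L'(z)| <= 1 and L''(z) <= c1 = 1/4.
def L(z):
    return np.log1p(np.exp(-z))

def dL(z):
    return -1.0 / (1.0 + np.exp(z))        # L'(z) = sigmoid(z) - 1, so |L'| < 1

def d2L(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)                   # L''(z) = sigmoid'(z), maximized at 1/4

z = np.linspace(-30, 30, 10001)
assert np.all(np.abs(dL(z)) <= 1.0)        # |L'| <= 1
assert np.all(d2L(z) <= 0.25 + 1e-12)      # L'' <= c1 = 1/4

# Schedules from the quoted setup (theta = 0.5 fixed in the paper);
# q1, q2, alpha(1) below are assumed values for illustration only.
theta, q1, q2, alpha_1 = 0.5, 1.1, 0.9, 1.0
eta = lambda t: theta * q1 ** (t - 1)      # eta(t)   = theta * q1^(t-1)
alpha = lambda t: alpha_1 * q2 ** (t - 1)  # alpha(t) = alpha(1) * q2^(t-1)
```

With q₁ > 1 and q₂ < 1 the penalty grows while the noise shrinks over iterations, which is the qualitative regime the experiments describe.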