Privacy Amplification by Iteration for ADMM with (Strongly) Convex Objective Functions
Authors: T-H. Hubert Chan, Hao Xie, Mengshi Zhao
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Although the paper is primarily theoretical, the authors report experiments on a general Lasso problem, empirically examining the effects of strong convexity and privacy noise magnitude on convergence rates (see the illustrative sketch after this table). |
| Researcher Affiliation | Academia | T-H. Hubert Chan, Hao Xie, Mengshi Zhao; Department of Computer Science, The University of Hong Kong (hubert@cs.hku.hk, hxie@connect.hku.hk, zmsxsl@connect.hku.hk). |
| Pseudocode | Yes | Algorithm 1: One ADMM Iteration. Algorithm 2: Mechanism M1. Algorithm 3: Mechanism M2. |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the work described in this paper, nor does it provide a direct link to a source-code repository. |
| Open Datasets | No | The paper mentions conducting 'experiments on a general Lasso problem' and a 'numerical illustration of our algorithms on a general Lasso problem'. This describes the problem type but does not specify a concrete, publicly available dataset with a link, DOI, or citation. |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | No | The paper does not explicitly describe any specific hardware specifications (e.g., GPU/CPU models, memory amounts, or detailed cloud/cluster configurations) used for running experiments. |
| Software Dependencies | No | The paper does not list specific software components with their version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA x.x, or specific solver versions). |
| Experiment Setup | No | The paper does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed system-level training settings; for the experimental results it only notes that 'details are given in the full version'. |
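Since neither code nor hyperparameters are released, the following is only a minimal sketch of the kind of experiment the Research Type row describes: standard ADMM for the Lasso objective with Gaussian noise injected into the primal update to stand in for the paper's privacy noise. The function name `noisy_admm_lasso`, the parameters `lam`, `rho`, `sigma`, and the synthetic data are illustrative assumptions; this is not the paper's Mechanism M1 or M2.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def noisy_admm_lasso(A, b, lam=0.5, rho=1.0, sigma=0.01, num_iters=100, rng=None):
    """ADMM for (1/2)||Ax - b||^2 + lam*||z||_1 with constraint x = z, adding
    Gaussian noise of scale `sigma` to each x-update as a stand-in for privacy noise."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)   # u is the scaled dual variable
    AtA_rhoI = A.T @ A + rho * np.eye(n)               # x-update is a ridge-regularized LS solve
    Atb = A.T @ b
    history = []
    for _ in range(num_iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))
        x = x + sigma * rng.standard_normal(n)         # injected privacy noise (illustrative)
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
        history.append(0.5 * np.linalg.norm(A @ z - b) ** 2 + lam * np.abs(z).sum())
    return z, history

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.05 * rng.standard_normal(200)
    z_hat, history = noisy_admm_lasso(A, b, sigma=0.01)
    print("final objective:", history[-1])
```

Sweeping `sigma` (the noise magnitude) and adding an explicit l2 term to make the objective strongly convex would reproduce, in spirit, the convergence-rate comparison the paper reports, though the actual experimental settings remain unspecified.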