Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Bridging Differential Privacy and Byzantine-Robustness via Model Aggregation

Authors: Heng Zhu, Qing Ling

IJCAI 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide numerical experiments to verify the effectiveness of DP-RSA on MNIST and CIFAR10 datasets, respectively. |
| Researcher Affiliation | Academia | Sun Yat-sen University; University of California, San Diego |
| Pseudocode | Yes | Algorithm 1 DP-RSA |
| Open Source Code | Yes | The code is available at https://github.com/oyhah/DP-RSA |
| Open Datasets | Yes | For MNIST, we train a two-layer neural network... For CIFAR10, we train a convolutional neural network (CNN) model... |
| Dataset Splits | No | The paper describes the training sample distribution but does not specify a separate validation dataset or its size/percentage for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not mention specific software dependencies with version numbers (e.g., Python 3.x, PyTorch x.y). |
| Experiment Setup | Yes | The regularization term is f₀(x) = 0.002‖x‖². The penalty parameter λ is set to 0.01 and the step size α^t is set to be constant as α = 0.01. [...] The privacy loss ϵ is set to 0.2, 0.4 and 1.38. |
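To make the quoted hyperparameters concrete, here is a minimal sketch of one server-side update step in an RSA-style robust aggregation scheme with added noise, using the λ, α, and regularizer values reported above. The update rule, noise mechanism, and function names (`dp_rsa_server_step`, `noise_std`) are illustrative assumptions modeled on sign-based RSA aggregation, not the paper's exact DP-RSA algorithm; consult the linked repository for the authors' implementation.

```python
import numpy as np

LAMBDA = 0.01  # penalty parameter λ (value quoted in the experiment setup)
ALPHA = 0.01   # constant step size α (value quoted in the experiment setup)

def regularizer_grad(x):
    """Gradient of the quoted regularizer f0(x) = 0.002 * ||x||^2, i.e. 0.004 * x."""
    return 0.004 * x

def dp_rsa_server_step(x_server, worker_models, noise_std, rng):
    """One illustrative server update: descend the regularizer plus a
    λ-weighted sign-based consensus term over worker models, with
    Gaussian noise added to each comparison (assumed privacy mechanism)."""
    consensus = np.zeros_like(x_server)
    for x_i in worker_models:
        noisy_diff = x_server - x_i + rng.normal(0.0, noise_std, size=x_server.shape)
        consensus += np.sign(noisy_diff)  # sign term caps any worker's influence
    grad = regularizer_grad(x_server) + LAMBDA * consensus
    return x_server - ALPHA * grad

# Toy usage: 8 workers, 5-dimensional model
rng = np.random.default_rng(0)
x = rng.normal(size=5)
workers = [rng.normal(size=5) for _ in range(8)]
x_next = dp_rsa_server_step(x, workers, noise_std=0.1, rng=rng)
```

The sign-based penalty is what gives RSA-style methods Byzantine-robustness: a malicious worker can flip signs but cannot scale its contribution, so each worker's influence on the step is bounded by λα.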