CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
Authors: Chulin Xie, Minghao Chen, Pin-Yu Chen, Bo Li
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we conduct comprehensive experiments across a range of federated datasets, and provide the first benchmark for certified robustness against backdoor attacks in federated learning. |
| Researcher Affiliation | Collaboration | 1University of Illinois at Urbana-Champaign 2Zhejiang University 3IBM Research. Correspondence to: Chulin Xie <chulinx2@illinois.edu>, Pin-Yu Chen <pinyu.chen@ibm.com>, Bo Li <lbo@illinois.edu>. |
| Pseudocode | Yes | Algorithm 1 Federated averaging with parameters clipping and perturbing; Algorithm 2 Certification of parameters smoothing (Python sketches of both follow the table) |
| Open Source Code | Yes | Our code is publicly available at https://github.com/AI-secure/CRFL. |
| Open Datasets | Yes | We train the FL system following our CRFL framework with three datasets: Lending Club Loan Data (LOAN) (Kan, 2019), MNIST (LeCun & Cortes, 2010), and EMNIST (Cohen et al., 2017). |
| Dataset Splits | No | The paper mentions 'training data' and 'test sets' but does not explicitly provide percentages, sample counts, or specific predefined splits for training, validation, and testing datasets to reproduce the experiment. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python version, library versions like PyTorch, TensorFlow, scikit-learn). |
| Experiment Setup | Yes | In all experiments, unless otherwise stated, we use σ_T = 0.01 to generate M = 1000 noisy models in the parameter smoothing procedure, and use the error tolerance α = 0.001 (see the certification sketch below). |
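
To make Algorithm 1 concrete, here is a minimal Python sketch of the server-side clip-and-perturb aggregation step, assuming flattened parameter vectors. The function name `clip_and_perturb_aggregate` and the NumPy interface are illustrative assumptions, not the authors' implementation; the released code at the GitHub link above is the authoritative version.

```python
import numpy as np

def clip_and_perturb_aggregate(client_params, clip_norm, sigma, rng=None):
    """Server step in the spirit of Algorithm 1: average the clients'
    parameter vectors, clip the aggregate to an L2 ball of radius
    `clip_norm`, then add isotropic Gaussian noise with std `sigma`."""
    rng = rng or np.random.default_rng()
    # Federated averaging over flattened parameter vectors.
    w = np.mean(client_params, axis=0)
    # Norm clipping bounds the influence of any (possibly backdoored) update.
    w = w * min(1.0, clip_norm / max(np.linalg.norm(w), 1e-12))
    # Gaussian perturbation is what enables the smoothing-based certificate.
    return w + rng.normal(0.0, sigma, size=w.shape)

# Toy usage: five clients, 10-dimensional parameter vectors.
updates = [np.random.randn(10) for _ in range(5)]
w_global = clip_and_perturb_aggregate(updates, clip_norm=15.0, sigma=0.01)
```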
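Similarly, a hedged sketch of the parameter smoothing certification (Algorithm 2) under the hyperparameters quoted in the table (σ_T = 0.01, M = 1000, α = 0.001). The `predict_fn(w, x)` interface is an assumption, and the returned radius uses the generic Gaussian-smoothing bound of Cohen et al. (2019) as a stand-in; CRFL's actual certificate further translates this into a bound on the backdoor perturbation magnitude via the model's smoothness across training rounds.

```python
import numpy as np
from collections import Counter
from scipy.stats import norm as gaussian
from statsmodels.stats.proportion import proportion_confint

def smooth_certify(predict_fn, w_final, x, sigma_t=0.01, m=1000, alpha=0.001,
                   rng=None):
    """Monte Carlo parameter smoothing in the spirit of Algorithm 2:
    sample `m` noisy copies of the final global parameters, take a
    majority vote on the prediction for input `x`, and lower-bound the
    top-class probability with error tolerance `alpha`."""
    rng = rng or np.random.default_rng()
    votes = Counter(
        predict_fn(w_final + rng.normal(0.0, sigma_t, size=w_final.shape), x)
        for _ in range(m)
    )
    top_label, top_count = votes.most_common(1)[0]
    # One-sided Clopper-Pearson lower confidence bound on the top class.
    p_lower, _ = proportion_confint(top_count, m, alpha=2 * alpha,
                                    method="beta")
    if p_lower <= 0.5:
        return None, 0.0  # abstain: majority class cannot be certified
    # Stand-in radius sigma * Phi^{-1}(p_lower), as in randomized smoothing.
    return top_label, sigma_t * gaussian.ppf(p_lower)
```

Abstaining whenever the lower confidence bound does not exceed 1/2 mirrors the CERTIFY procedure of Cohen et al., which this sketch borrows for the Monte Carlo estimation step.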