FLAME: Differentially Private Federated Learning in the Shuffle Model
Authors: Ruixuan Liu, Yang Cao, Hong Chen, Ruoyang Guo, Masatoshi Yoshikawa (pp. 8688-8696)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on a real-world dataset validate that SS-Topk improves the testing accuracy by 60.7% compared with local-model-based FL. |
| Researcher Affiliation | Academia | 1Renmin University of China 2Kyoto University |
| Pseudocode | Yes | Algorithm 2 FLAME: Encoding, Shuffling, Analyzing. |
| Open Source Code | No | The paper does not include an explicit statement or link indicating that the source code for the methodology described is publicly available. |
| Open Datasets | Yes | We evaluate the learning performance on the MNIST dataset and a logistic regression model with d = 7850, n = 1000. |
| Dataset Splits | No | The paper mentions using the “MNIST dataset” and evaluating “testing accuracy” but does not provide specific details on how the dataset was split into training, validation, and test sets (e.g., exact percentages or sample counts). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions applying the “Laplace Mechanism” as a basic randomizer but does not specify any software dependencies (libraries, frameworks, or languages) with version numbers. |
| Experiment Setup | Yes | For SS-Simple, SS-Double, and SS-Topk, we apply the Laplace Mechanism as the basic randomizer R for each dimension. Given ϵl = 78.5, the split privacy budget of each dimension is ϵl/d = 0.01 for SS-Simple and ϵl/k = 0.5 for SS-Double and SS-Topk. We evaluate the learning performance on the MNIST dataset and a logistic regression model with d = 7850, n = 1000. |
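
The per-dimension budget split described in the setup row can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `laplace_randomizer` and the unit-sensitivity assumption are hypothetical; only the parameter values (ϵl = 78.5, d = 7850, giving ϵl/d = 0.01 per dimension) come from the paper.

```python
import numpy as np

def laplace_randomizer(x, epsilon_l, sensitivity=1.0):
    """Perturb each coordinate of an update vector with Laplace noise.

    The local budget epsilon_l is split evenly across the d dimensions,
    so each coordinate gets epsilon_l / d (e.g. 78.5 / 7850 = 0.01).
    Sensitivity of 1.0 per dimension is an illustrative assumption.
    """
    d = x.shape[0]
    eps_per_dim = epsilon_l / d
    scale = sensitivity / eps_per_dim  # Laplace scale b = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=d)
    return x + noise

# Example with the paper's reported parameters
x = np.zeros(7850)          # placeholder model update of dimension d = 7850
y = laplace_randomizer(x, epsilon_l=78.5)
```

Note that splitting a large ϵl across thousands of dimensions yields a tiny per-dimension budget (0.01 here), which is why the noise scale per coordinate is large; SS-Double and SS-Topk mitigate this by spending the budget over only k selected dimensions (ϵl/k = 0.5).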