Improving Federated Learning Face Recognition via Privacy-Agnostic Clusters

Authors: Qiang Meng, Feng Zhou, Hainan Ren, Tianshu Feng, Guochao Liu, Yuanqing Lin

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section, together with the appendix, describes extensive experiments on challenging benchmarks to illustrate the superiority of PrivacyFace in training face recognition with a privacy guarantee. The experiments are organized under the headings Datasets, Training, and Performances on Benchmarks; Tab. 1 presents verification performances on various benchmarks.
Researcher Affiliation | Collaboration | Qiang Meng¹, Feng Zhou¹, Hainan Ren¹, Tianshu Feng², Guochao Liu¹, Yuanqing Lin¹ (¹Algorithm Research, Aibee Inc.; ²Independent Researcher)
Pseudocode | Yes | Algorithm 1: Differentially Private Local Clustering (DPLC); Algorithm 2: The PrivacyFace Training Scheme. (An illustrative DPLC sketch follows the table.)
Open Source Code | No | The paper references an open-source pre-trained model used in the work (https://github.com/IrvingMeng/MagFace), but does not provide access to source code for the method described in this paper. The reproducibility statement discusses experimental details but not a code release.
Open Datasets | Yes | Datasets. CASIA-WebFace (Yi et al., 2014) contains 0.5M images from 10K celebrities and serves as the dataset for pre-training. BUPT-Balancedface (Wang & Deng, 2020), which comprises four sub-datasets... is used to simulate the federated setting.
Dataset Splits | No | The paper states that BUPT-Balancedface is used to simulate the federated setting (training) and that RFW, IJB-B, and IJB-C are used for evaluation (testing). However, it does not describe a separate validation split, nor does it specify how the training data is split into training and validation subsets for hyperparameter tuning.
Hardware Specification | Yes | For reproducibility and fair comparison, all models are trained on 8 1080Ti GPUs with a fixed seed. (A seed-fixing sketch follows the table.)
Software Dependencies | No | The paper mentions models and optimizers such as ResNet18, SGD, ArcFace, CosFace, and the Adam optimizer, but does not specify version numbers for any software libraries or frameworks (e.g., PyTorch or Python versions).
Experiment Setup | Yes | We finetune φ0 by SGD for M = 10 communication rounds on BUPT-Balancedface, with learning rate 0.001, batch size 512 and weight decay 5e-4. ... Unless stated otherwise, the parameters for PrivacyFace are default to T = 512, Q = 1, ρ = 1.3 and ϵ = 1. (A hyperparameter sketch follows the table.)
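For readers who want a concrete handle on Algorithm 1, the following is a minimal Python sketch of differentially private local clustering in the spirit of DPLC. Under stated assumptions, it greedily groups L2-normalized class centers within an angular radius rho of a seed center and releases a Laplace-perturbed cluster mean. The greedy seeding, the choice of the Laplace mechanism, and the sensitivity heuristic are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def dplc_sketch(class_centers, rho=1.3, epsilon=1.0, num_clusters=1, seed=0):
    """Illustrative sketch of DP local clustering (NOT the paper's exact DPLC).

    Greedily groups L2-normalized class centers whose angle to a seed center
    is at most `rho` (radians), then releases each cluster's mean perturbed
    with Laplace noise. The noise calibration is a generic heuristic.
    """
    rng = np.random.default_rng(seed)
    centers = class_centers / np.linalg.norm(class_centers, axis=1, keepdims=True)
    remaining = list(range(len(centers)))
    released = []
    for _ in range(num_clusters):
        if not remaining:
            break
        anchor = centers[remaining[0]]
        sims = centers[remaining] @ anchor                  # cosine similarities
        angles = np.arccos(np.clip(sims, -1.0, 1.0))
        members = [idx for idx, a in zip(remaining, angles) if a <= rho]
        cluster = centers[members]
        # Replacing one member of a size-n cluster moves the mean by at most
        # 2/n in norm; we use that as a rough per-coordinate noise scale.
        sensitivity = 2.0 / len(cluster)
        noisy_mean = cluster.mean(axis=0) + rng.laplace(
            scale=sensitivity / epsilon, size=centers.shape[1])
        released.append(noisy_mean)
        remaining = [idx for idx in remaining if idx not in members]
    return released

# Example: release one noisy cluster from 100 random 512-d class centers.
released = dplc_sketch(np.random.randn(100, 512))
```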
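The "fixed seed" claim in the Hardware Specification row can be made concrete with a standard seeding routine. This is a minimal sketch assuming a PyTorch stack; the paper states only that a fixed seed is used, not which framework or which seed value.

```python
import os
import random

import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    """Fix common RNG sources so repeated runs are comparable."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)             # seeds every visible GPU
    torch.backends.cudnn.deterministic = True    # trade speed for determinism
    torch.backends.cudnn.benchmark = False
```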
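The Experiment Setup row names enough hyperparameters to pin down an optimizer configuration. The sketch below collects them; the backbone is a hypothetical placeholder for the pre-trained φ0, and the federated loop body is elided because the per-round pipeline is not recoverable from the table alone.

```python
import torch

# Values taken from the table: lr = 0.001, weight decay = 5e-4, batch size 512,
# M = 10 communication rounds, and the PrivacyFace defaults T, Q, rho, epsilon.
backbone = torch.nn.Linear(512, 512)   # placeholder for the pre-trained phi_0

optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, weight_decay=5e-4)

M = 10                 # communication rounds on BUPT-Balancedface
BATCH_SIZE = 512
PRIVACYFACE_DEFAULTS = {"T": 512, "Q": 1, "rho": 1.3, "epsilon": 1.0}

for communication_round in range(M):
    # Local client updates and server aggregation would run here; those
    # details are not specified in this reproducibility table.
    pass
```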