Privacy-Preserving Face Recognition in the Frequency Domain
Authors: Yinggui Wang, Jian Liu, Man Luo, Le Yang, Li Wang
AAAI 2022, pp. 2558-2566 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments: In this section, we first evaluate the proposed analysis network for trade-off analysis between privacy and accuracy. Performance comparisons of different algorithms over standard face datasets are carried out, followed by attacking experiments and discussions for PPFR-FD. |
| Researcher Affiliation | Collaboration | Yinggui Wang1, Jian Liu1, Man Luo1, Le Yang2, Li Wang1; 1Ant Group, 2University of Canterbury |
| Pseudocode | No | The paper describes methods in textual paragraphs and uses schematic diagrams (e.g., Fig. 2) to illustrate processes, but it does not contain any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | We use the MS-Celeb-1M dataset with 3,648,176 images from 79,891 subjects as the training set. |
| Dataset Splits | No | The paper mentions using MS-Celeb-1M, CASIA, and LFW as training and test datasets, and 7 benchmarks for evaluation, but it does not explicitly define or specify a validation set or its split details. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running experiments, such as GPU models, CPU specifications, or cloud computing instances. |
| Software Dependencies | No | The paper mentions using MobileNetV2, ArcFace loss, ResNet50, SE-blocks, and the SGD optimizer, but it does not provide specific version numbers for any of the software dependencies or libraries used. |
| Experiment Setup | Yes | All models are trained for 50 epochs using the SGD optimizer with momentum of 0.9 and weight decay of 0.0001. For the threshold γ in (1), we set it to 0.3. λ in (2) is set to 1. We train the baseline model on a ResNet50 backbone with SE-blocks and a batch size of 512. |
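The optimizer settings quoted above (SGD with momentum 0.9 and weight decay 0.0001) can be sketched in plain Python. This is not the authors' code; the function name, learning rate, and toy quadratic loss below are assumptions chosen purely to illustrate how the reported momentum and weight-decay hyperparameters enter the update rule.

```python
def sgd_momentum_step(params, grads, velocity, lr=0.1,
                      momentum=0.9, weight_decay=1e-4):
    """One SGD update with momentum and L2 weight decay (illustrative sketch).

    momentum=0.9 and weight_decay=1e-4 match the values reported in the
    paper's experiment setup; lr=0.1 is an assumed placeholder.
    """
    new_params, new_velocity = [], []
    for p, g, v in zip(params, grads, velocity):
        g = g + weight_decay * p       # L2 weight decay added to the gradient
        v = momentum * v - lr * g      # velocity accumulates past gradients
        new_params.append(p + v)
        new_velocity.append(v)
    return new_params, new_velocity

# Toy usage: minimize f(w) = w^2 (gradient 2w) starting from w = 1.0,
# iterating 50 times to mirror the 50-epoch training budget.
params, velocity = [1.0], [0.0]
for _ in range(50):
    grads = [2.0 * p for p in params]
    params, velocity = sgd_momentum_step(params, grads, velocity)
print(params[0])
```

With a real model, the per-parameter gradients would come from backpropagating the ArcFace loss through the ResNet50 backbone rather than from a scalar quadratic.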