Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
SlerpFace: Face Template Protection via Spherical Linear Interpolation
Authors: Zhizhou Zhong, Yuxi Mi, Yuge Huang, Jianqing Xu, Guodong Mu, Shouhong Ding, Jingyun Zhang, Rizen Guo, Yunsheng Wu, Shuigeng Zhou
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that SlerpFace provides satisfactory recognition accuracy and comprehensive protection against inversion and other attack forms, superior to prior arts. |
| Researcher Affiliation | Collaboration | 1 Fudan University; 2 Youtu Lab, Tencent; 3 WeChat Pay Lab, Tencent |
| Pseudocode | No | No explicit pseudocode or algorithm blocks are found in the paper. |
| Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology. |
| Open Datasets | Yes | We employ an IR-50 model, trained on the MS1Mv2 (Guo et al. 2016) dataset... Evaluation is done on 5 regular-size datasets, LFW (Learned-Miller 2014), CFP-FP (Sengupta et al. 2016), AgeDB (Moschoglou et al. 2017), CPLFW (Zheng and Deng 2018), and CALFW (Zheng, Deng, and Hu 2017), and 2 large-scale datasets, IJB-B (Whitelam et al. 2017) and IJB-C (Maze et al. 2018). |
| Dataset Splits | Yes | Evaluation is done on 5 regular-size datasets, LFW (Learned-Miller 2014), CFP-FP (Sengupta et al. 2016), AgeDB (Moschoglou et al. 2017), CPLFW (Zheng and Deng 2018), and CALFW (Zheng, Deng, and Hu 2017), and 2 large-scale datasets, IJB-B (Whitelam et al. 2017) and IJB-C (Maze et al. 2018). |
| Hardware Specification | No | We employ an IR-50 model, trained on the MS1Mv2 (Guo et al. 2016) dataset on 8 GPUs in parallel with ArcFace loss as Lfr, as the FR backbone... The last two columns in Tab. 1 show the average enrollment (to register into a database) and matching (to match once with the database) time (ms) for a single template on a personal laptop, highlighting SlerpFace's advantage. |
| Software Dependencies | No | We employ an IR-50 model, trained on the MS1Mv2 (Guo et al. 2016) dataset on 8 GPUs in parallel with ArcFace loss as Lfr, as the FR backbone. We train the model for 24 epochs using a stochastic gradient descent (SGD) optimizer, choosing the total batch size, initial learning rate, momentum, and weight decay as 256, 0.01, 0.9, 0.0005, respectively. |
| Experiment Setup | Yes | We train the model for 24 epochs using a stochastic gradient descent (SGD) optimizer, choosing the total batch size, initial learning rate, momentum, and weight decay as 256, 0.01, 0.9, 0.0005, respectively. We set parameters (α, β, γ, c, m) as (0.9, 0.5, 1, 16, 49). |
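For context on the technique the title names: spherical linear interpolation (slerp) moves between two unit vectors along the great-circle arc of the hypersphere, which is why it suits hypersphere-normalized face templates. The sketch below is the generic textbook slerp formula, not the authors' protection pipeline (the paper reports no open-source code), and all names in it are illustrative.

```python
import math

def slerp(p, q, t):
    """Spherical linear interpolation between unit vectors p and q.

    Follows the classic formula
        slerp(p, q, t) = sin((1-t)*w)/sin(w) * p + sin(t*w)/sin(w) * q,
    where w = arccos(p . q). At t=0 it returns p, at t=1 it returns q,
    and every intermediate result stays on the unit sphere.
    """
    # Clamp the dot product to guard against floating-point drift outside [-1, 1].
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    omega = math.acos(dot)
    if omega < 1e-8:
        # Nearly identical directions: linear interpolation is numerically safer.
        return [(1 - t) * a + t * b for a, b in zip(p, q)]
    s = math.sin(omega)
    w_p = math.sin((1 - t) * omega) / s
    w_q = math.sin(t * omega) / s
    return [w_p * a + w_q * b for a, b in zip(p, q)]

# Halfway between two orthogonal unit vectors lands at 45 degrees on the arc.
v = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
```

Unlike plain linear interpolation, the result needs no re-normalization: the weights are chosen so the interpolant always has unit norm.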