Masked Face Recognition with Generative-to-Discriminative Representations

Authors: Shiming Ge, Weijia Guo, Chenyu Li, Junzheng Zhang, Yong Li, Dan Zeng

ICML 2024 | Conference PDF | Archive PDF

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on synthetic and realistic datasets demonstrate the effectiveness of our approach in recognizing masked faces.
Researcher Affiliation | Collaboration | 1. Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100092, China. 2. School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China. 3. Cloud Music Inc., Hangzhou 311215, China. 4. Department of Communication Engineering, Shanghai University, Shanghai 200040, China.
Pseudocode | No | The paper describes the proposed framework and learning process in detail but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no statement about making its source code available and provides no link to a code repository.
Open Datasets | Yes | We use Celeb-A (Liu et al., 2015) for generating synthetic training data, LFW (Huang et al., 2007) for synthetic masked face evaluation, and RMFD (Huang et al., 2021) and MLFW (Wang et al., 2022) for real-world masked face evaluation.
Dataset Splits | Yes | We randomly split it into a training set and a validation set with a ratio of 6:1. (The split is illustrated in the configuration sketch after this table.)
Hardware Specification | Yes | We conduct efficiency analysis on an NVIDIA GeForce RTX 3090 GPU by performing inference on 100 masked faces with size of 256×256. (See the timing sketch after this table.)
Software Dependencies | No | The experiments are implemented in PyTorch; however, no specific version numbers for PyTorch or any other software dependencies are mentioned.
Experiment Setup | Yes | For the generative encoder, we finetune the ICT inpainting network with a batch size of 16 using the Adam optimizer, where the learning rate is 10^-5 and β1 = 0.5, β2 = 0.9. For the discriminative reformer... All models are trained with a batch size of 64 and the SGD optimizer. The initial learning rate is 0.1 and is multiplied by 0.5 every 16 epochs. The momentum and weight decay are set to 0.9 and 5×10^-4, respectively. (See the configuration sketch after this table.)