LARNet: Lie Algebra Residual Network for Face Recognition

Authors: Xiaolong Yang, Xiaohong Jia, Dihong Gong, Dong-Ming Yan, Zhifeng Li, Wei Liu

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states: "Comprehensive experimental evaluations on both frontal-profile face datasets and general face recognition datasets convincingly demonstrate that our method consistently outperforms the state-of-the-art ones." Section 4 (Experimental Results) adds: "In this section, we first provide a description of implementation details (Sec. 4.1). Besides, we list all the datasets used in the experiments and briefly explain their own characteristics (Sec. 4.2). Furthermore, we present two ablation studies on the architecture and gating control function, respectively, which explain the effectiveness of our experimental design on recognition performance (Sec. 4.3). We also compare with existing methods and some findings about profile face representation, and conduct extensive experiments on frontal-profile face verification-identification and general face recognition tasks (Sec. 4.4)."
Researcher Affiliation | Collaboration | (1) Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, P.R. China; (2) University of Chinese Academy of Sciences, Beijing, P.R. China; (3) Tencent Data Platform, P.R. China; (4) NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, P.R. China. Correspondence to: Xiaohong Jia, Zhifeng Li, and Wei Liu <xhjia@amss.ac.cn; michaelzfli@tencent.com; wl2223@columbia.edu>.
Pseudocode | No | The paper describes the architecture and method in paragraph text and figures, but does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code, such as a specific repository link, an explicit code release statement, or code in supplementary materials.
Open Datasets | Yes | "We separately employ the two most widely used face datasets as training data in order to conduct fair comparison with the other methods, i.e., cleaned MS-Celeb-1M database (MS1MV2) (Guo et al., 2016) and CASIA-WebFace (Yi et al., 2014)."
Dataset Splits | No | The paper mentions "Data Preprocessing" and "Training Details" but does not provide specific training/validation/test dataset splits (exact percentages, sample counts, or detailed splitting methodology). It refers to "protocols of training and testing" for the datasets but does not elaborate on the splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software components and models such as ArcFace, MTCNN, the SGD optimizer, PReLU, and ResNet-50, but does not provide specific version numbers for these or other ancillary software dependencies.
Experiment Setup | Yes | "The model is trained with 180K iterations. The initial learning rate is set as 0.1, and is divided by 10 after 100K, 160K iterations. The SGD optimizer has momentum 0.9, and weight decay 5e-4."
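The reported training schedule can be sketched as a plain step-decay function (a minimal illustration of the stated hyperparameters, not the authors' code; the function and parameter names are ours):

```python
def learning_rate(iteration, base_lr=0.1, milestones=(100_000, 160_000), gamma=0.1):
    """Step-decay schedule as described in the paper: the initial learning
    rate of 0.1 is divided by 10 after 100K and again after 160K iterations,
    over a total of 180K training iterations."""
    lr = base_lr
    for m in milestones:
        if iteration >= m:
            lr *= gamma
    return lr

# Remaining SGD settings reported in the paper.
sgd_config = {"momentum": 0.9, "weight_decay": 5e-4}
```

In a framework such as PyTorch, the same schedule would typically be expressed with a multi-step scheduler over the two milestones rather than a hand-rolled function.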