Enhanced Face Recognition using Intra-class Incoherence Constraint
Authors: Yuanqing Huang, Yinggui Wang, Le Yang, Lei Wang
ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate this recombination is likely to contribute to an improved facial feature representation, even better than features from the original superior model. Motivated by this discovery, we further consider how to improve FR accuracy when there is only one FR model available. Inspired by knowledge distillation, we incorporate the intra-class incoherence constraint (IIC) to solve the problem. Experiments on various FR benchmarks show that the existing state-of-the-art method with IIC can be further improved, highlighting its potential to further enhance FR performance. |
| Researcher Affiliation | Collaboration | Yuanqing Huang (Ant Group), Yinggui Wang (Ant Group), Le Yang (University of Canterbury), Lei Wang (Ant Group) |
| Pseudocode | No | The paper describes methods and processes but does not include any explicitly labeled pseudocode or algorithm blocks with structured steps. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code, nor does it provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | We use CASIA (Yi et al., 2014) and MS1M-ArcFace (also known as MS1MV2) (Guo et al., 2016) as the training datasets. |
| Dataset Splits | No | The paper mentions training datasets (CASIA, MS1MV2) and test datasets/benchmarks (LFW, CFP-FP, AgeDB, CALFW, CPLFW, VGG2, IJB-C) but does not explicitly specify internal validation dataset splits used during model training. |
| Hardware Specification | No | The paper does not specify any particular hardware components such as GPU or CPU models, memory, or cloud computing instance types used for experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the implementation, such as Python, PyTorch, or TensorFlow versions. |
| Experiment Setup | Yes | We train the model with ResNet50 and ResNet100 (He et al., 2016) as the backbone and a batch size of 512, using metric and loss functions similar to the definitions in their original texts. The head of the baseline model is Backbone-Flatten-FC-BN, with an embedding dimension of 512 and a dropout probability of 0.4 to output the embedding feature. Unless specified otherwise, models are trained for 50 epochs using the SGD optimizer with a momentum of 0.9 and a weight decay of 0.0001, with an initial learning rate of 0.1 and step scheduling at 10, 20, 30 and 40 epochs. For the scale parameter, we set it to 64, following the suggestion of Wang et al. (2018b). |
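
The reported setup maps to a fairly standard embedding-training configuration. The sketch below is illustrative only: the paper releases no code and names no framework, so the use of PyTorch/torchvision, the placement of dropout inside the head, and the learning-rate decay factor of 0.1 are assumptions rather than the authors' implementation, and the margin-based loss with scale 64 is omitted.

```python
# Hypothetical sketch of the training configuration described above, assuming PyTorch.
# Layer ordering inside the head and the LR decay factor are guesses; the margin loss
# (with scale parameter 64) used by the paper is not reproduced here.
import torch
import torch.nn as nn
from torchvision.models import resnet50  # stand-in for the paper's ResNet50 backbone


class FaceEmbeddingHead(nn.Module):
    """Backbone -> Flatten -> Dropout -> FC -> BN head producing 512-d embeddings."""

    def __init__(self, backbone: nn.Module, feat_dim: int, embed_dim: int = 512):
        super().__init__()
        self.backbone = backbone          # feature extractor (ResNet50/100 in the paper)
        self.flatten = nn.Flatten()
        self.dropout = nn.Dropout(p=0.4)  # dropout probability reported in the paper
        self.fc = nn.Linear(feat_dim, embed_dim)
        self.bn = nn.BatchNorm1d(embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.bn(self.fc(self.dropout(self.flatten(self.backbone(x)))))


# Backbone with its classification layer removed (illustrative only).
backbone = resnet50(weights=None)
feat_dim = backbone.fc.in_features
backbone.fc = nn.Identity()
model = FaceEmbeddingHead(backbone, feat_dim)

# SGD with momentum 0.9, weight decay 1e-4, initial LR 0.1, batch size 512,
# stepped down at epochs 10/20/30/40 over 50 epochs total (gamma=0.1 assumed).
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10, 20, 30, 40], gamma=0.1)
```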