CQA-Face: Contrastive Quality-Aware Attentions for Face Recognition
Authors: Qiangchang Wang, Guodong Guo
AAAI 2022, pp. 2504-2512 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | It outperforms the state-of-the-art methods on several benchmarks, demonstrating its effectiveness and usefulness. |
| Researcher Affiliation | Collaboration | (1) Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, USA; (2) Institute of Deep Learning, Baidu Research, Beijing, China; (3) National Engineering Laboratory for Deep Learning Technology and Application, Beijing, China |
| Pseudocode | No | The paper describes the algorithms using textual explanations and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology or a direct link to a code repository. |
| Open Datasets | Yes | VGGFace2 (Cao et al. 2018) and MS-Celeb-1M (Guo et al. 2016) are used as the training data. |
| Dataset Splits | No | The paper mentions using training and testing datasets, but does not explicitly describe a validation split (e.g., specific percentages or counts for a validation set) from any single dataset. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using CosFace loss and ResNet-100 as the stem CNN but does not provide version numbers for any specific software libraries, frameworks, or programming languages used (e.g., Python version, PyTorch/TensorFlow version). |
| Experiment Setup | Yes | The number of warm-up epochs is 2. The batch size is set to 256. During training on VGGFace2, the learning rate starts at 0.03 and is divided by 10 at the 7th and 10th epochs, respectively; it is set to 1e-4 at the 12th epoch, and training stops at the 12th epoch. During training on MS1MV2, the learning rate starts at 0.03 and is divided by 10 at the 13th and 19th epochs, respectively; it is set to 1e-4 at the 23rd epoch, and training stops at the 24th epoch. ResNet-100 is used as the stem CNN. After comparative experiments, the number of local branches (b) is set to 4. The σ and t in Eqn. (2) are set to 0.01 and 0.2, respectively. The λ in Eqn. (9) is 0.5. |
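As a reading aid, the learning-rate schedule quoted in the Experiment Setup row can be written as a small function. The sketch below is only an illustration of the reported hyperparameters, not the authors' implementation (no code was released); the optimizer (SGD with momentum 0.9), 1-indexed epoch counting, and a linear warm-up shape are assumptions that the paper excerpt does not specify.

```python
# Minimal sketch of the reported training schedule for CQA-Face.
# Assumptions (not stated in the excerpt above): SGD with momentum 0.9,
# 1-indexed epochs, and a linear warm-up over the first 2 epochs.
import torch
from torch import nn
from torch.optim.lr_scheduler import LambdaLR

BATCH_SIZE = 256
WARMUP_EPOCHS = 2
BASE_LR = 0.03

# Reported milestones: LR divided by 10 at the listed epochs, then fixed at 1e-4.
SCHEDULES = {
    "vggface2": {"decay_epochs": (7, 10), "final_lr_epoch": 12, "last_epoch": 12},
    "ms1mv2":   {"decay_epochs": (13, 19), "final_lr_epoch": 23, "last_epoch": 24},
}

def lr_at_epoch(epoch: int, dataset: str = "vggface2") -> float:
    """Learning rate at a given 1-indexed epoch on the chosen training set."""
    cfg = SCHEDULES[dataset]
    if epoch <= WARMUP_EPOCHS:          # assumed linear warm-up shape
        return BASE_LR * epoch / WARMUP_EPOCHS
    if epoch >= cfg["final_lr_epoch"]:
        return 1e-4
    lr = BASE_LR
    for milestone in cfg["decay_epochs"]:
        if epoch >= milestone:
            lr /= 10.0
    return lr

# Hook the schedule into PyTorch via LambdaLR, which rescales the optimizer's base LR.
model = nn.Linear(512, 512)             # placeholder; the paper uses a ResNet-100 stem CNN
optimizer = torch.optim.SGD(model.parameters(), lr=BASE_LR, momentum=0.9)
scheduler = LambdaLR(optimizer, lr_lambda=lambda e: lr_at_epoch(e + 1) / BASE_LR)

for epoch in range(1, SCHEDULES["vggface2"]["last_epoch"] + 1):
    # ... one training epoch over VGGFace2 with batch size 256 would go here ...
    scheduler.step()
```

With these settings the sketch yields 0.03 for epochs 3-6, 0.003 for epochs 7-9, 0.0003 for epochs 10-11, and 1e-4 at epoch 12 on VGGFace2, matching the schedule described in the table.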