AnchorFace: Boosting TAR@FAR for Practical Face Recognition

Authors: Jiaheng Liu, Haoyu Qin, Yichao Wu, Ding Liang

AAAI 2022, pp. 1711-1719

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results on multiple benchmark datasets demonstrate the effectiveness of AnchorFace."
Researcher Affiliation | Collaboration | 1) State Key Lab of Software Development Environment, Beihang University; 2) SenseTime Group Limited
Pseudocode | Yes | Algorithm 1: AnchorFace
Open Source Code | No | The paper does not state that the source code is released, nor does it provide a link to a code repository.
Open Datasets | Yes | "For the training dataset, we follow many existing works to employ the refined version of the MS-1M (Guo et al. 2016) dataset provided by (Deng et al. 2019), which consists of about 85k identities with 5.8M images."
Dataset Splits | No | The paper names separate datasets for training (MS-1M) and testing (IJB-B, IJB-C, IFRT) but gives no percentages, counts, or methodology for train/validation/test splits within these datasets.
Hardware Specification | No | The paper does not report the hardware (e.g., GPU/CPU models, memory amounts) used for its experiments.
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., library or solver names and versions).
Experiment Setup | Yes | "For the training process of AnchorFace, the initial learning rate is 0.1 and is divided by 10 at the 110k, 190k, and 220k iterations. The batch size and the total number of iterations are set to 512 and 240k, respectively. For the online-updating set S, by default, we set the maximum number of features per identity (i.e., K) and the maximum number of valid steps for each feature (i.e., M) to 5 and 1000, respectively. We set τ to 0.01 in Eq. 4 and Eq. 5. Besides, the loss weight of the FAR loss Lf (i.e., λ1) is set to 1k for an Anchor FAR of 1e-4, and the loss weight of the TAR loss Lt (i.e., λ2) is set to 10."
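
The learning-rate schedule quoted in the Experiment Setup row maps directly onto a standard step decay. Below is a minimal PyTorch sketch: the optimizer choice (SGD with momentum) and the placeholder model are assumptions not stated in the paper; only the initial rate of 0.1, the 110k/190k/220k milestones, the divide-by-10 decay, the batch size of 512, and the 240k total iterations come from the quoted setup.

```python
import torch
import torch.nn as nn

# Placeholder model; the actual backbone and AnchorFace head are not
# restated in this report.
model = nn.Linear(512, 512)

# SGD with momentum is an assumption; the quoted setup gives only the
# learning-rate schedule itself.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# LR 0.1, divided by 10 at 110k, 190k, and 220k iterations (from the
# Experiment Setup row above).
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[110_000, 190_000, 220_000], gamma=0.1)

BATCH_SIZE = 512        # from the quoted setup
TOTAL_ITERATIONS = 240_000  # from the quoted setup

for step in range(TOTAL_ITERATIONS):
    # ... forward pass and AnchorFace loss (λ1·Lf + λ2·Lt) would go here ...
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```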
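The online-updating set S is described in the quoted setup only through its two bounds: at most K = 5 features per identity, each valid for at most M = 1000 steps. The following is a hypothetical sketch of such a bounded, step-expiring feature store, assuming FIFO eviction; the paper's actual update rule (Algorithm 1) is not reproduced in this report.

```python
from collections import defaultdict, deque

class OnlineFeatureSet:
    """Hypothetical sketch of the online-updating set S: at most K features
    per identity, each feature expiring after M optimization steps
    (K=5, M=1000 per the Experiment Setup row). The FIFO eviction policy
    and storage layout here are assumptions, not the paper's algorithm."""

    def __init__(self, k=5, m=1000):
        self.k = k
        self.m = m
        # identity -> deque of (step_added, feature), oldest at the left
        self.store = defaultdict(deque)

    def add(self, identity, feature, step):
        q = self.store[identity]
        q.append((step, feature))
        while len(q) > self.k:  # keep at most K features per identity
            q.popleft()

    def valid_features(self, identity, step):
        # Drop features older than M steps, then return the survivors.
        q = self.store[identity]
        while q and step - q[0][0] > self.m:
            q.popleft()
        return [feat for _, feat in q]
```

A usage call would look like `s.add("id_42", feat, step=120_000)` followed by `s.valid_features("id_42", step=121_500)`, which would return nothing once the feature is more than 1000 steps old.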