Cluster and Aggregate: Face Recognition with Large Probe Set

Authors: Minchul Kim, Feng Liu, Anil K Jain, Xiaoming Liu

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on IJB-B and IJB-S benchmark datasets show the superiority of the proposed two-stage paradigm in unconstrained face recognition.
Researcher Affiliation | Academia | Minchul Kim, Department of Computer Science, Michigan State University, East Lansing, MI 48824, kimminc2@msu.edu; Feng Liu, Department of Computer Science, Michigan State University, East Lansing, MI 48824, liufeng6@msu.edu; Anil Jain, Department of Computer Science, Michigan State University, East Lansing, MI 48824, jain@msu.edu; Xiaoming Liu, Department of Computer Science, Michigan State University, East Lansing, MI 48824, liuxm@msu.edu
Pseudocode | No | The paper describes the proposed approach with text and diagrams but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code and pretrained models are available in Link.
Open Datasets | Yes | We use WebFace4M [57] as our training dataset. It is a large-scale dataset with 4.2M facial images from 205,990 identities. ... We test on IJB-B [51], IJB-C [35] and IJB-S [21] datasets.
Dataset Splits | No | We use WebFace4M [57] as our training dataset. ... We test on IJB-B [51], IJB-C [35] and IJB-S [21] datasets. ... For IJB-S, we use the protocols Surv.-to-Single, Surv.-to-Booking and Surv.-to-Surv. (No explicit train/validation/test splits are defined for the overall experimental setup.)
Hardware Specification | No | The paper discusses computational efficiency and GPU memory usage but does not provide specific details on the hardware used for experiments, such as GPU/CPU models or memory specifications.
Software Dependencies | No | The paper mentions software components and models such as 'IResNet-101' and 'ArcFace loss' but does not specify version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | The training hyper-parameters such as optimizers are detailed in Supp. A.
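
As noted in the Pseudocode row above, the paper describes its two-stage approach only with text and diagrams. The sketch below is a minimal, hypothetical illustration of a generic "cluster, then aggregate" pipeline for a large probe set; the k-means clustering and size-weighted averaging are simple stand-ins chosen for readability, not the authors' learned clustering and aggregation networks, and all names and shapes are assumptions.

```python
# Hypothetical sketch of a two-stage "cluster, then aggregate" pipeline for
# set-based face recognition. k-means and size-weighted averaging are
# illustrative stand-ins, not the paper's learned modules.
import numpy as np


def cluster_features(feats: np.ndarray, num_clusters: int, seed: int = 0):
    """Group an unordered set of probe features into `num_clusters` clusters."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), num_clusters, replace=False)]
    for _ in range(10):  # a few Lloyd-style refinement iterations
        dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        for k in range(num_clusters):
            members = feats[assign == k]
            if len(members) > 0:
                centers[k] = members.mean(axis=0)
    return assign, centers


def aggregate_clusters(centers: np.ndarray, sizes: np.ndarray) -> np.ndarray:
    """Fuse cluster centers into one probe descriptor, weighted by cluster size."""
    weights = sizes / sizes.sum()
    fused = (weights[:, None] * centers).sum(axis=0)
    return fused / np.linalg.norm(fused)  # L2-normalize for cosine matching


# Usage: N unordered probe features, e.g. 512-D embeddings of the frames in a
# surveillance track (IJB-S-style probe sets can contain hundreds of frames).
probe_feats = np.random.randn(500, 512).astype(np.float32)
assignments, centers = cluster_features(probe_feats, num_clusters=4)
cluster_sizes = np.bincount(assignments, minlength=4).astype(np.float32)
probe_descriptor = aggregate_clusters(centers, cluster_sizes)
print(probe_descriptor.shape)  # (512,)
```

The point of the two stages is that clustering first reduces hundreds of redundant, variable-quality frames to a few summary features, so the final aggregation step operates on a small, fixed-size set regardless of how large the probe set is.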