FaceCoresetNet: Differentiable Coresets for Face Set Recognition

Authors: Gil Shapira, Yosi Keller

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We set a new SOTA for set-based face verification on the IJB-B and IJB-C datasets. Our code is publicly available at https://github.com/ligaripash/FaceCoresetNet. To allow for a valid comparison with the current SOTA, we strictly follow the experimental protocol of CAFace (Kim et al. 2022, NeurIPS 2022). Our training dataset is WebFace4M (Zhu et al. 2021), which contains 4.2 million facial images from 205,990 identities. We test on the IJB-B (Whitelam et al. 2017) and IJB-C (Maze et al. 2018) datasets.
Researcher Affiliation | Collaboration | Gil Shapira (1,2) and Yosi Keller (1). 1: Bar-Ilan University, Ramat Gan, Israel. 2: Samsung Semiconductor Israel R&D Center (SIRC).
Pseudocode | Yes | Listing 1: Differential Core-Template Selection

import torch
import torch.nn.functional as nnf

def CT_selection(F, K, F_norms, gamma, train=True):
    """
    F: normalized feature data, [B, N, C]
    K: the intended Core Template (CT) size (scalar)
    F_norms: norms of F, [B, N]
    gamma: learned quality-vs-diversity balance parameter (scalar)
    Returns: Core Template CT of shape [B, K, C]
    """
    # Soft selection while training; (near-)hard argmax selection at inference.
    tau = 1.0 if train else 1e-10
    # Seed the Core Template with the highest-quality (largest-norm) feature.
    one_hot_max = nnf.gumbel_softmax(F_norms, tau=tau, hard=True)    # [B, N]
    CT = torch.einsum('bnc,bn->bc', F, one_hot_max).unsqueeze(1)     # [B, 1, C]
    d_CT_to_F = quality_aware_dist(CT, F_norms, F, gamma)
    for _ in range(K - 1):
        # Greedily add the feature farthest from CT under the quality-aware distance.
        one_hot_max = nnf.gumbel_softmax(d_CT_to_F, tau=tau, hard=True)
        new_f = torch.einsum('bnc,bn->bc', F, one_hot_max).unsqueeze(1)
        d_new_f_to_CT = quality_aware_dist(new_f, F_norms, F, gamma)
        d_CT_to_F = torch.min(d_CT_to_F, d_new_f_to_CT)
        CT = torch.cat([CT, new_f], dim=1)
    return CT

def quality_aware_dist(candidate_point, F_norms, F, gamma):
    # Cosine distance from every feature to the candidate, weighted by quality.
    inner_product = torch.bmm(F, candidate_point.transpose(1, 2)).squeeze(-1)  # [B, N]
    cosine_dist = 1 - inner_product
    quality_dist = torch.pow(F_norms, gamma) * cosine_dist
    return quality_dist
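A minimal usage sketch for the listing above; the batch size, set size, feature dimension, and the random stand-ins for features and quality norms are illustrative assumptions, not values from the paper:

import torch

B, N, C, K = 2, 16, 512, 3   # batch, template set size, feature dim, coreset size
F = torch.nn.functional.normalize(torch.randn(B, N, C), dim=-1)
F_norms = torch.randn(B, N).abs()   # stand-in for per-image quality (feature norms)
gamma = torch.tensor(1.0)           # learned in the paper; fixed here for illustration

CT = CT_selection(F, K, F_norms, gamma, train=False)
print(CT.shape)   # torch.Size([2, 3, 512])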
Open Source Code | Yes | Our code is publicly available at https://github.com/ligaripash/FaceCoresetNet.
Open Datasets | Yes | Our training dataset is WebFace4M (Zhu et al. 2021), which contains 4.2 million facial images from 205,990 identities. We test on the IJB-B (Whitelam et al. 2017) and IJB-C (Maze et al. 2018) datasets, as they are intended for template-based face recognition.
Dataset Splits | No | The paper states that it follows the experimental protocol of CAFace (Kim et al. 2022) and uses WebFace4M for training and IJB-B/C for testing, but it does not explicitly specify the exact training/validation/test splits (e.g., percentages or sample counts) used for its models within the main text.
Hardware Specification | No | The paper does not describe the specific hardware (e.g., GPU models, CPU types, or cloud instance specifications) used to run the experiments.
Software Dependencies | No | The paper mentions that the coreset selection is integrated into a 'PyTorch model' but does not provide version numbers for PyTorch or any other software dependency, aside from citing fvcore (2023) for FLOPs calculation, which is not a dependency of the model itself.
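For reference, FLOP counting with fvcore typically looks like the sketch below; the toy two-layer model is a hypothetical stand-in, since the paper does not publish its exact measurement script:

import torch
from fvcore.nn import FlopCountAnalysis

# Hypothetical stand-in model; not the FaceCoresetNet architecture.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 512),
)
inputs = torch.randn(1, 512)

flops = FlopCountAnalysis(model, inputs)
print(flops.total())   # total FLOPs for one forward pass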
Experiment Setup | Yes | We train FaceCoresetNet for 2 epochs until convergence, compared with CAFace's 10-epoch training schedule. During training, a τ value of 1 is typically used, while during inference, τ approaches zero (τ → 0+). The ideal coreset size was identified as 3, and this optimal value remained constant throughout all subsequent experiments.
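To illustrate the τ schedule quoted above (a minimal sketch of standard Gumbel-Softmax behavior, not the authors' training code): at τ = 1 the relaxed selection stays soft and differentiable, while as τ → 0+ it numerically saturates to a one-hot choice.

import torch
import torch.nn.functional as nnf

logits = torch.tensor([[2.0, 0.5, 0.1]])

torch.manual_seed(0)
soft = nnf.gumbel_softmax(logits, tau=1.0)     # training regime: soft, gradient-friendly
torch.manual_seed(0)
hard = nnf.gumbel_softmax(logits, tau=1e-10)   # inference regime: effectively one-hot
print(soft)   # a smooth distribution; exact values depend on the sampled Gumbel noise
print(hard)   # saturates to a one-hot vector over the same sampled noise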