Dual-Reference Face Retrieval

Authors: BingZhang Hu, Feng Zheng, Ling Shao

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiments show promising results, outperforming hierarchical methods.
Researcher Affiliation | Collaboration | (1) School of Computing Sciences, University of East Anglia, Norwich, UK; (2) Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, USA; (3) JD Artificial Intelligence Research (JDAIR), Beijing, China
Pseudocode | No | The paper describes methods and a network architecture, but does not include explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | In the experiment, we evaluate our DRFR on three face recognition and age estimation datasets: Cross-Age Celebrity Dataset (CACD) (Chen, Chen, and Hsu 2014), FGNet (Lanitis and Cootes 2002), and MORPH (Ricanek and Tesafaye 2006).
Dataset Splits | No | The paper states, "we take 60% data as training data and the remaining for the test," but does not explicitly mention a separate validation set or its split percentage.
Hardware Specification | No | The paper does not specify any hardware components (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper does not list any specific software dependencies or their version numbers required for reproducibility.
Experiment Setup | Yes | For the hyper-parameters, we set the ε in Eq. 11 as 5 to calculate the similarity matrix set S, and the embedding size on the joint manifold is set as 128. [...] we employed an online quartet selection protocol which is inspired by (Chen et al. 2017). During training, the images of an entire mini batch are first propagated forward to extract the embeddings with the current model, and those quartets which violate the average margin in this mini batch are then selected to train the network.
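The online quartet selection step quoted in the Experiment Setup row can be sketched in code. The snippet below is a minimal, hypothetical illustration rather than the authors' implementation: the quartet structure (one anchor, one positive, two negatives), the margin value, and every function and variable name are assumptions, since the paper's exact quartet definition and Eq. 11 are not reproduced here.

```python
# Hypothetical sketch of online quartet selection within a mini-batch.
# All names and the (anchor, positive, negative, negative) quartet layout are assumptions.
import itertools
import numpy as np

def select_violating_quartets(embeddings, labels, margin=0.2):
    """Return index quartets (a, p, n1, n2) that violate the margin.

    embeddings: (batch, d) array of L2-normalised embeddings from the current model.
    labels: (batch,) identity labels for the mini-batch images.
    """
    # Pairwise Euclidean distances between all embeddings in the batch.
    dist = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    quartets = []
    for a in range(len(labels)):
        positives = [p for p in range(len(labels)) if labels[p] == labels[a] and p != a]
        negatives = [n for n in range(len(labels)) if labels[n] != labels[a]]
        for p in positives:
            for n1, n2 in itertools.combinations(negatives, 2):
                # A quartet "violates" when the positive pair is not separated
                # from the closer of the two negatives by the margin.
                if dist[a, p] + margin > min(dist[a, n1], dist[a, n2]):
                    quartets.append((a, p, n1, n2))
    return quartets

# Example usage with random 128-d embeddings for a batch of 8 images, 4 identities.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 128))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
hard_quartets = select_violating_quartets(emb, labels, margin=0.2)
```

In this reading, the entire mini-batch is first embedded with the current model, and only the quartets returned by the selector contribute to the loss for that training step, mirroring the protocol quoted from the paper.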