Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On Hair Recognition in the Wild by Machine
Authors: Joseph Roth, Xiaoming Liu
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present an algorithm for identity verification using only information from the hair. ... The proposed hair matcher achieves 71.53% accuracy on the LFW View 2 dataset. Hair also reduces the error of a Commercial Off-The-Shelf (COTS) face matcher through simple score-level fusion by 5.7%. |
| Researcher Affiliation | Academia | Joseph Roth and Xiaoming Liu Department of Computer Science and Engineering Michigan State University, East Lansing, MI 48824 EMAIL |
| Pseudocode | Yes | Algorithm 1: Learning a hair matcher via boosting. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for their methodology. |
| Open Datasets | Yes | To evaluate the proposed algorithm, we use the de facto database for unconstrained face recognition, LFW (Huang et al. 2007). LFW is popular due to its unconstrained nature, difficulty, well-defined protocol, and the availability of results of prior work. |
| Dataset Splits | Yes | We follow the restricted View 2 protocol, where 3,000 genuine matches and 3,000 impostor matches are divided into 10 equal-size partitions for cross validation. |
| Hardware Specification | No | The paper does not specify any hardware used for running the experiments (e.g., CPU, GPU models). |
| Software Dependencies | No | The paper does not specify any software versions for libraries, frameworks, or programming languages used. |
| Experiment Setup | No | The paper describes the model (AdaBoost technique, feature types) and data processing (alignment, patch localization, feature extraction methods) but does not provide specific hyperparameters like learning rates, batch sizes, or optimizer settings used during training. |
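The restricted View 2 protocol quoted in the Dataset Splits row (3,000 genuine and 3,000 impostor pairs split into 10 equal folds) can be sketched as plain fold bookkeeping. This is an illustrative sketch, not the paper's code; pair indices stand in for actual LFW image pairs.

```python
def view2_folds(n_pairs=6000, n_folds=10):
    """Sketch of 10-fold cross validation over 6,000 LFW View 2 pairs.

    Returns (train, test) index lists for each round: one fold held out
    for testing, the remaining nine used for training.
    """
    fold_size = n_pairs // n_folds
    folds = [list(range(i * fold_size, (i + 1) * fold_size))
             for i in range(n_folds)]
    splits = []
    for i in range(n_folds):
        test = folds[i]
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        splits.append((train, test))
    return splits

splits = view2_folds()  # 10 rounds, each testing on 600 pairs
```

In the actual protocol each fold balances genuine and impostor pairs; the sketch above only illustrates the partition arithmetic.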
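The 5.7% error reduction quoted in the Research Type row comes from simple score-level fusion of the hair matcher with a COTS face matcher. A common form of score-level fusion is a weighted sum of normalized scores; the weight and scores below are hypothetical illustrations, not values from the paper.

```python
import numpy as np

def min_max_normalize(scores):
    """Scale scores to [0, 1] so two matchers' outputs are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

def fuse_scores(face_scores, hair_scores, w=0.8):
    """Score-level fusion: weighted sum of normalized per-pair scores.

    The weight w is a hypothetical value; in practice it would be tuned
    on held-out folds of the cross-validation protocol.
    """
    f = min_max_normalize(np.asarray(face_scores, dtype=float))
    h = min_max_normalize(np.asarray(hair_scores, dtype=float))
    return w * f + (1 - w) * h

# Illustrative raw scores for three verification pairs.
fused = fuse_scores([0.9, 0.2, 0.6], [0.7, 0.1, 0.8])
```

Min-max normalization is just one option; z-score or tanh normalization are common alternatives when score distributions differ sharply between matchers.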