Optimal Bayesian Hashing for Efficient Face Recognition
Authors: Qi Dai, Jianguo Li, Jun Wang, Yurong Chen, Yu-Gang Jiang
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental evaluations and comparative studies clearly demonstrate that the proposed Bayesian Hashing approach outperforms other peer methods in both accuracy and speed. We conduct extensive experiments on two popular face benchmark datasets, i.e., the FRGC and the LFW. |
| Researcher Affiliation | Collaboration | (1) Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China; (2) Intel Labs China, Beijing, China; (3) Institute of Data Science and Technology, Alibaba Group, Seattle, WA, USA |
| Pseudocode | Yes | Table 1: Boosted FERNs training algorithm using Gentle Boost. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for their method is publicly available. |
| Open Datasets | Yes | FRGC: The FRGC version-2 [Phillips et al., 2005] was designed to be a comprehensive benchmark for face recognition... LFW: The popular LFW dataset consists of 13,233 images of 5,749 individuals, and all the images are collected from the Internet. |
| Dataset Splits | Yes | The evaluation dataset is divided into 10 subsets to form a 10-fold cross-validation. In each trial, we use nine subsets for training and one for testing. By introducing the Sequential Forward Floating Search (SFFS), we are able to select an optimal set of permutation models to achieve improved performance without additional overhead...select good and complementary ones by evaluating on a separate validation set. (Both procedures are sketched after the table.) |
| Hardware Specification | No | The paper does not mention any specific hardware used for running the experiments (e.g., CPU, GPU models, memory). |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, specific libraries with versions). |
| Experiment Setup | Yes | Each face is normalized to 128×128 according to these landmarks. We extract n = 240 patches from the normalized faces according to the landmark positions with different patch sizes. A mirror image is generated for each normalized face, and another n = 240 patches can be extracted. Therefore, there are 480 GLOH patches in total, and each patch is described by a 136-dimensional feature. In our implementation, we group every 8 bits (S = 8) together into one byte F. In addition, to reduce possible redundancy among different bytes, we adopt the boosted framework: we take each FERN as a weak classifier and train Gentle Boost, a variant of AdaBoost, to ensemble different FERN bytes. During training, each FERN byte may be chosen only once to avoid redundancy. Finally, we obtain a decision function for recognition as... where T ≤ M is the number of bytes picked. (A training-loop sketch follows the table.) |
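The 10-fold protocol quoted in the Dataset Splits row is straightforward to reproduce. Below is a minimal sketch of such a split; the random subset assignment and the name `ten_fold_splits` are assumptions, since the paper does not state how the ten subsets are formed.

```python
import numpy as np

def ten_fold_splits(num_samples, seed=0):
    """Yield (train_idx, test_idx) pairs for a 10-fold cross-validation."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(num_samples)
    # Partition the shuffled indices into 10 roughly equal subsets.
    subsets = np.array_split(indices, 10)
    for i in range(10):
        test = subsets[i]
        train = np.concatenate([subsets[j] for j in range(10) if j != i])
        yield train, test

# Usage: one trial per held-out subset, nine subsets for training.
# for train_idx, test_idx in ten_fold_splits(13233):
#     train_and_evaluate(train_idx, test_idx)  # hypothetical evaluation routine
```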
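The same row cites Sequential Forward Floating Search (SFFS) for selecting permutation models on a separate validation set. The following is a simplified sketch of SFFS with a single conditional backward step per iteration; the `score` callback (validation accuracy of a candidate subset) and the iteration cap are assumptions, not details from the paper.

```python
def sffs(candidates, score, target_size, max_steps=500):
    """Simplified SFFS: forward inclusion plus one conditional backward step."""
    selected = []
    best = {}  # best validation score seen at each subset size
    for _ in range(max_steps):
        if len(selected) == target_size:
            break
        # Forward step: add the candidate that maximizes the validation score.
        remaining = [c for c in candidates if c not in selected]
        selected.append(max(remaining, key=lambda c: score(selected + [c])))
        best[len(selected)] = max(best.get(len(selected), float("-inf")),
                                  score(selected))
        # Floating step: drop the least useful member if the reduced subset
        # beats the best score previously recorded at that smaller size.
        if len(selected) > 2:
            weakest = max(selected,
                          key=lambda c: score([s for s in selected if s != c]))
            reduced = [s for s in selected if s != weakest]
            if score(reduced) > best.get(len(reduced), float("-inf")):
                selected = reduced
                best[len(selected)] = score(reduced)
    return selected

# Usage (hypothetical): pick 20 complementary permutation models out of 100.
# chosen = sffs(range(100), validation_accuracy, target_size=20)
```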
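Finally, the Experiment Setup row describes grouping hash bits into S = 8-bit FERN bytes and ensembling them with Gentle Boost, with each byte picked at most once. The sketch below illustrates that loop under stated assumptions: the 256-entry lookup-table weak learner, the weight update, and all names are illustrative, not the authors' code.

```python
import numpy as np

S = 8  # bits per FERN byte, as in the paper

def pack_bytes(bits):
    """Group an (n_samples, n_bits) 0/1 integer matrix into n_bits // S bytes."""
    n, b = bits.shape
    return bits.reshape(n, b // S, S).dot(1 << np.arange(S))

def fit_fern(byte_values, y, w):
    """Weighted regression lookup table over the 256 byte values (weak fit)."""
    table = np.zeros(256)
    for v in range(256):
        mask = byte_values == v
        if w[mask].sum() > 0:
            table[v] = np.average(y[mask], weights=w[mask])
    return table

def train_boosted_ferns(bits, y, T):
    """y in {-1, +1}; returns the picked byte indices and their lookup tables."""
    bytes_all = pack_bytes(bits)
    n, M = bytes_all.shape
    w = np.ones(n) / n
    picked, tables, available = [], [], set(range(M))
    for _ in range(min(T, M)):
        # Pick the unused FERN byte whose lookup table has the smallest
        # weighted squared error against the labels.
        best = min(available, key=lambda m: np.sum(
            w * (y - fit_fern(bytes_all[:, m], y, w)[bytes_all[:, m]]) ** 2))
        table = fit_fern(bytes_all[:, best], y, w)
        picked.append(best)
        tables.append(table)
        available.remove(best)  # each FERN byte may be chosen only once
        # GentleBoost-style weight update and renormalization.
        w *= np.exp(-y * table[bytes_all[:, best]])
        w /= w.sum()
    return picked, tables

def decide(bits, picked, tables):
    """Decision function: sign of the summed responses of the T picked FERNs."""
    bytes_all = pack_bytes(bits)
    score = sum(t[bytes_all[:, m]] for m, t in zip(picked, tables))
    return np.sign(score)
```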