Biased Feature Learning for Occlusion Invariant Face Recognition

Authors: Changbin Shao, Jing Huo, Lei Qi, Zhen-Hua Feng, Wenbin Li, Chuanqi Dong, Yang Gao

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrate its superiority as well as the generalization capability with different network architectures and loss functions." The paper includes a dedicated section, Section 5 ("Experiments"), detailing the dataset, model, and network training, along with tables and figures reporting performance metrics such as accuracy for various setups and comparisons.
Researcher Affiliation | Academia | (1) State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; (2) School of Computer, Jiangsu University of Science and Technology, Zhenjiang, China; (3) Department of Computer Science, and the Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK.
Pseudocode | No | The paper describes the proposed methods using prose and mathematical formulations but does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper states regarding the O-LFW dataset: "It will be released for further studies in occlusion-invariant FR." This refers to the dataset, not the source code for their methodology. No other statements or links regarding code availability are provided.
Open Datasets | Yes | "We use CASIA-WebFace (10575 classes with 0.49M samples) as the training set of IN, and synthesize the same number of virtual samples as its occluded version IO (we simply set n = o)."
Dataset Splits | No | The paper describes the training set (CASIA-WebFace) and the test set (O-LFW) and their construction. It discusses evaluating performance between epochs (e.g., "average accuracy between 11th and 20th epochs"), but it does not specify a distinct validation set or a training/validation/test split of CASIA-WebFace for hyperparameter tuning or early stopping.
Hardware Specification | No | The paper does not specify any hardware used for running the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper states that models are trained "with the Adam optimizer in PyTorch". While these are specific software components, no version numbers are provided for PyTorch, which are necessary for reproducible software dependencies.
Experiment Setup | Yes | "The batch size is separately set as 128 for original IN and 256 for hybrid IN+IO. For all the experiments, random horizontal flip is applied to the training images. The softmax loss is used and the learning rate (lr) is set to 5e-4 in subsections 5.1 and 5.2. We set lr=1e-4 for R18 with the ArcFace loss to avoid non-convergence, and lr=5e-4 for all the others. After 100 epochs, average results of the last 5 epochs are reported. We apply random crop (0.8-1.0) to the training images. We use L2 regularization with a weight decay of 0.01, following AdamW."
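The reported setup above can be collected into a hedged, stdlib-only sketch of the hyperparameter choices. The function name and the string encodings of dataset, loss, and backbone are illustrative assumptions, not identifiers from the paper; only the numeric values (batch sizes, learning rates, weight decay) come from the quoted text.

```python
# Sketch of the paper's reported training hyperparameters.
# Assumed (not from the paper): the function name and the string labels
# "IN", "IN+IO", "R18", "ArcFace" used to select a configuration.

def training_hyperparams(dataset: str, loss: str, backbone: str) -> dict:
    """Return the batch size and learning rate matching the reported setup."""
    # Batch size: 128 for the original IN set, 256 for the hybrid IN+IO set.
    batch_size = 256 if dataset == "IN+IO" else 128
    # Learning rate: 1e-4 for R18 with the ArcFace loss (to avoid
    # non-convergence), 5e-4 for all other configurations.
    lr = 1e-4 if (backbone == "R18" and loss == "ArcFace") else 5e-4
    return {
        "batch_size": batch_size,
        "lr": lr,
        "optimizer": "Adam",
        "weight_decay": 0.01,  # L2 regularization, following AdamW
        "augmentation": ["random horizontal flip", "random crop (0.8-1.0)"],
    }

print(training_hyperparams("IN+IO", "ArcFace", "R18"))
```

Results are then reported as the average over the last 5 of 100 training epochs, per the quoted setup.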