Incremental Boosting Convolutional Neural Network for Facial Action Unit Recognition

Authors: Shizhong Han, Zibo Meng, Ahmed-Shehab Khan, Yan Tong

NeurIPS 2016

Reproducibility assessment: each entry below lists the variable, the assessed result, and the supporting LLM response.
Research Type: Experimental
LLM response: "Experimental results on four benchmark AU databases have demonstrated that the IB-CNN yields significant improvement over the traditional CNN and the boosting CNN without incremental learning, as well as outperforming the state-of-the-art CNN-based methods in AU recognition."
Researcher Affiliation: Academia
LLM response: "Department of Computer Science & Engineering, University of South Carolina, Columbia, SC; {han38, mengz, akhan}@email.sc.edu, tongy@cse.sc.edu"
Pseudocode: Yes
LLM response: "Algorithm 1 Incremental Boosting Algorithm for the IB-CNN" (a hedged sketch of the incremental update appears after this table)
Open Source Code: No
LLM response: "The paper does not contain any explicit statements about releasing source code for their method or provide a link to a code repository."
Open Datasets: Yes
LLM response: "Experimental results on four benchmark AU-coded databases, i.e., the Cohn-Kanade (CK) database [25], the FERA2015 SEMAINE database [11], the FERA2015 BP4D database [11], and the Denver Intensity of Spontaneous Facial Action (DISFA) database [12]"
Dataset Splits: Yes
LLM response: "All the models compared were trained on the training set and evaluated on the validation set. The training-testing process was repeated 5 times. The mean and standard deviation of the F1 score and the two-alternative forced choice (2AFC) score are calculated from the 5 runs for each target AU. ... A 9-fold cross-validation strategy is employed for the DISFA database, where 8 subsets (24 subjects in total) were utilized for training and the remaining subset (3 subjects) for testing." (a sketch of this subject-exclusive protocol and the two metrics appears after this table)
Hardware Specification: No
LLM response: "The paper does not provide any specific hardware details, such as GPU models, CPU types, or memory specifications, used for the experiments."
Software Dependencies: No
LLM response: "The proposed IB-CNN is implemented based on a modification of cifar10_quick in Caffe [28]." (no version number is given for Caffe or any other library)
Experiment Setup: Yes
LLM response: "The stochastic gradient descent, with a momentum of 0.9 and a mini-batch size of 100, is used for training the CNN for each target AU." (a sketch of this optimizer configuration appears after this table)
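
For readers who want to prototype the idea behind Algorithm 1, below is a minimal NumPy sketch of mini-batch boosting with an incremental running-average update of the strong classifier. It assumes labels in {-1, +1}, treats each neuron of a boosting layer as a decision stump, and uses a generic AdaBoost-style selection rule; the weak-learner form, selection rule, and update weights are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def boost_batch(acts, labels, n_weak=10):
    """AdaBoost-style selection of weak classifiers on one mini-batch.
    Each neuron's activation sign acts as a decision stump (an
    illustrative assumption, not the paper's exact weak learner)."""
    n_samples, n_neurons = acts.shape
    sample_w = np.full(n_samples, 1.0 / n_samples)
    stump_preds = np.sign(acts)  # (n_samples, n_neurons), labels in {-1, +1}
    alphas = np.zeros(n_neurons)
    for _ in range(n_weak):
        # Weighted error of every stump on the current sample weights.
        errs = sample_w @ (stump_preds != labels[:, None])
        j = int(np.argmin(errs))
        err = float(np.clip(errs[j], 1e-12, 1 - 1e-12))
        alpha = 0.5 * np.log((1.0 - err) / err)
        alphas[j] += alpha
        # Re-weight samples that the chosen stump gets wrong.
        sample_w *= np.exp(-alpha * labels * stump_preds[:, j])
        sample_w /= sample_w.sum()
    return alphas  # weights of the per-batch strong classifier

def incremental_boosting(batches, n_neurons):
    """Running-average accumulation of per-batch strong classifiers,
    so weak learners selected on earlier mini-batches are kept
    instead of being discarded at every iteration."""
    w_acc = np.zeros(n_neurons)
    for t, (acts, labels) in enumerate(batches, start=1):
        w_batch = boost_batch(acts, labels)
        w_acc = ((t - 1) * w_acc + w_batch) / t  # incremental update
    return w_acc

# Toy usage with random activations and labels.
rng = np.random.default_rng(0)
batches = [(rng.standard_normal((100, 32)),
            rng.choice([-1.0, 1.0], size=100)) for _ in range(5)]
w = incremental_boosting(batches, n_neurons=32)
print("prediction for one sample:", np.sign(np.sign(batches[0][0][0]) @ w))
```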
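The DISFA protocol quoted above (27 subjects, 9 folds of 3, subject-exclusive splits) and the two reported metrics can be sketched as follows. The fold construction and function names are hypothetical, and model training is omitted; the 2AFC score is computed here as ROC AUC, to which it is equivalent for a binary detection task, via scikit-learn.

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

def nine_fold_scores(subject_ids, y_score, y_true):
    """Subject-exclusive 9-fold CV as described for DISFA: 8 folds
    (24 subjects) would be used for training and 1 fold (3 subjects)
    for testing. Here y_score stands in for a trained model's
    per-frame outputs (hypothetical; training is omitted)."""
    subjects = np.unique(subject_ids)       # 27 DISFA subjects
    folds = np.array_split(subjects, 9)     # 9 folds of 3 subjects each
    f1s, afc2s = [], []
    for fold in folds:
        test = np.isin(subject_ids, fold)   # held-out subjects only
        f1s.append(f1_score(y_true[test], y_score[test] > 0.5))
        # 2AFC score, computed as ROC AUC (equivalent for binary labels).
        afc2s.append(roc_auc_score(y_true[test], y_score[test]))
    return np.mean(f1s), np.std(f1s), np.mean(afc2s), np.std(afc2s)

# Toy usage: 27 subjects x 50 frames with random scores and labels.
rng = np.random.default_rng(0)
ids = np.repeat(np.arange(27), 50)
scores = rng.random(ids.size)
labels = rng.integers(0, 2, size=ids.size)
print(nine_fold_scores(ids, scores, labels))
```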
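The stated optimization settings (SGD, momentum 0.9, mini-batch size 100) translate directly into most frameworks. The paper's implementation is a modified cifar10_quick in Caffe; the PyTorch snippet below only mirrors the reported hyperparameters, and the network body and learning rate are assumptions, since neither is specified in the excerpt.

```python
import torch
from torch import nn

# Stand-in for the modified cifar10_quick CNN (architecture assumed;
# only the optimizer settings below are taken from the paper).
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # one binary output per target AU
)

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-3,        # assumed: the excerpt does not report a learning rate
    momentum=0.9,   # as stated in the paper
)

# Mini-batch size of 100, as stated; `dataset` is a placeholder.
# loader = torch.utils.data.DataLoader(dataset, batch_size=100, shuffle=True)
```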