Multi-label Co-regularization for Semi-supervised Facial Action Unit Recognition

Authors: Xuesong Niu, Hu Han, Shiguang Shan, Xilin Chen

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on several benchmarks show that the proposed approach can effectively leverage large datasets of unlabeled face images to improve the AU recognition robustness and outperform the state-of-the-art semi-supervised AU recognition methods.
Researcher Affiliation | Academia | 1 Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; 2 Peng Cheng Laboratory, Shenzhen, China; 3 University of Chinese Academy of Sciences, Beijing 100049, China; 4 CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available: https://github.com/nxsEdson/MLCR
Open Datasets | Yes | EmotioNet is an in-the-wild database for facial AU recognition containing about 1M images downloaded from the Internet; 20,722 face images were provided with manual annotations for 12 AUs by experts. [8]
Dataset Splits | Yes | For all the experiments on BP4D, we conduct a subject-exclusive 3-fold cross-validation test following [16, 29].
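The subject-exclusive 3-fold protocol quoted above can be sketched in plain Python. This is a minimal illustration, not the paper's released code: the helper name and the toy subject list are hypothetical; only the constraint (no subject appears in more than one fold) comes from the excerpt.

```python
from collections import defaultdict

def subject_exclusive_folds(subject_ids, n_folds=3):
    """Group sample indices by subject, then assign whole subjects
    round-robin to folds, so a subject's frames never span two folds
    (subject-exclusive cross-validation)."""
    by_subject = defaultdict(list)
    for idx, subj in enumerate(subject_ids):
        by_subject[subj].append(idx)
    folds = [[] for _ in range(n_folds)]
    for k, subj in enumerate(sorted(by_subject)):
        folds[k % n_folds].extend(by_subject[subj])
    return folds

# Hypothetical toy labels: three subjects, two frames each.
subjects = ["s1", "s1", "s2", "s2", "s3", "s3"]
folds = subject_exclusive_folds(subjects, n_folds=3)
```

Any train/test partition built from such folds is guaranteed not to share subjects between training and testing, which is what distinguishes this protocol from a random frame-level split.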
Hardware Specification | Yes | All the experiments are conducted with PyTorch [24] on a GeForce GTX 1080 Ti GPU.
Software Dependencies | No | The paper mentions software such as PyTorch, ResNet-34, the Adam optimizer, GCN, and SeetaFace2, but does not provide version numbers for these dependencies.
Experiment Setup | Yes | Two ResNet-34 models are chosen as the feature generators... For the experiments on EmotioNet and BP4D, we first pre-train the two feature generators by setting λmv = 400 and λcr = 100. The Adam optimizer with an initial learning rate of 0.001 is applied... The initial learning rate is set to 0.001 for GCN and 0.0001 for the two feature generators. We set the maximum iteration to 60 epochs for pre-training the feature generators and 20 epochs for fine-tuning the GCN and feature generators. The batch size for all the experiments is set to 100.
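The schedule in the quote can be collected into a single configuration sketch. The dictionary keys below are illustrative names, not identifiers from the released code; only the numeric values come from the excerpt.

```python
# Hedged sketch of the reported hyperparameters; key names are
# assumptions, the numbers are the ones quoted from the paper.
pretrain = {
    "lambda_mv": 400,       # multi-view loss weight
    "lambda_cr": 100,       # co-regularization loss weight
    "optimizer": "Adam",
    "lr": 1e-3,             # feature-generator pre-training
    "epochs": 60,
    "batch_size": 100,
}
finetune = {
    "optimizer": "Adam",
    "lr_gcn": 1e-3,         # GCN learning rate
    "lr_generators": 1e-4,  # feature-generator learning rate
    "epochs": 20,
    "batch_size": 100,
}
```

In PyTorch the two fine-tuning learning rates would typically be expressed as per-parameter-group options of a single `torch.optim.Adam` optimizer, though the paper does not state how its code organizes this.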