Dual Semi-Supervised Learning for Facial Action Unit Recognition
Authors: Guozhu Peng, Shangfei Wang
AAAI 2019, pp. 8827-8834
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Within-database and cross-database experiments on three benchmark databases demonstrate the superiority of our method in both AU recognition and face synthesis compared to state-of-the-art works. |
| Researcher Affiliation | Academia | Key Lab of Computing and Communication Software of Anhui Province, School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, P.R. China, 230027. {gzpeng@mail., sfwang@}ustc.edu.cn |
| Pseudocode | Yes | The training procedure is shown as Algorithm 1. |
| Open Source Code | No | The paper does not provide any link or explicit statement about the availability of open-source code for the methodology described. |
| Open Datasets | Yes | Three benchmark databases are used in our experiments: the Extended Cohn-Kanade database (CK+) (Lucey et al. 2010), the MMI database (Pantic et al. 2005), and the UNBC-McMaster Shoulder Pain Expression Archive database (Pain) (Lucey et al. 2011). |
| Dataset Splits | Yes | We conduct within-database experiments via five-fold subject-independent cross-validation, as well as cross-database experiments. |
| Hardware Specification | No | The paper does not specify the hardware used for experiments (e.g., CPU, GPU models). |
| Software Dependencies | No | The paper mentions using the 'TensorFlow framework' and the 'Adam algorithm' but does not specify version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We set α = 0.5 in our experiments to balance the distributions of pseudo-tuples generated from C and G. Discriminator D, classifier C, and generator G are parameterized through a four-layer feedforward network. We implement the proposed method using the TensorFlow framework. Any gradient-based learning rule could be used to update parameters for the optimization method. We use the Adam (Kingma and Ba 2014) algorithm to optimize D, C, and G in our experiments. |
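The five-fold subject-independent cross-validation mentioned in the Dataset Splits row means folds are partitioned by subject, so no person's face appears in both the training and test sets of a fold. The paper does not publish its splitting code, so the following is a minimal stdlib-only sketch of one common way to build such folds; the function name and the round-robin assignment are assumptions for illustration.

```python
import random
from collections import defaultdict

def subject_independent_folds(subject_ids, n_folds=5, seed=0):
    """Partition sample indices into n_folds so that all samples of a
    given subject land in exactly one fold (subject-independent CV).

    subject_ids: list where subject_ids[i] is the subject of sample i.
    Returns a list of n_folds lists of sample indices.
    """
    # Group sample indices by subject.
    by_subject = defaultdict(list)
    for idx, sid in enumerate(subject_ids):
        by_subject[sid].append(idx)

    # Shuffle subjects deterministically, then deal them out round-robin.
    subjects = sorted(by_subject)
    random.Random(seed).shuffle(subjects)
    folds = [[] for _ in range(n_folds)]
    for i, sid in enumerate(subjects):
        folds[i % n_folds].extend(by_subject[sid])
    return folds

# Toy example: 10 samples from 5 subjects, two samples each.
ids = ["s1", "s1", "s2", "s2", "s3", "s3", "s4", "s4", "s5", "s5"]
folds = subject_independent_folds(ids, n_folds=5)
```

For each fold, the remaining four folds form the training set; because subjects never straddle folds, every test subject is unseen during training, which is the property the cross-validation protocol requires.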
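The Experiment Setup row states that α = 0.5 balances the distributions of pseudo-tuples generated by the classifier C and the generator G. One simple reading is a mixture-sampling scheme: each pseudo-tuple in a batch is drawn from C's pool with probability α and from G's pool otherwise. The sketch below illustrates that interpretation only; the function name, the pool representation, and the per-sample coin flip are assumptions, not the paper's actual implementation.

```python
import random

def sample_pseudo_tuples(from_C, from_G, n, alpha=0.5, seed=0):
    """Draw n pseudo-tuples from two candidate pools.

    Each draw comes from the classifier-generated pool `from_C` with
    probability `alpha`, otherwise from the generator-generated pool
    `from_G`. With alpha = 0.5 the two sources contribute equally in
    expectation, balancing their distributions in the sampled batch.
    (Illustrative sketch; pool contents here are hypothetical.)
    """
    rng = random.Random(seed)
    batch = []
    for _ in range(n):
        pool = from_C if rng.random() < alpha else from_G
        batch.append(rng.choice(pool))
    return batch

# Hypothetical pools of pseudo-tuples produced by C and G.
batch = sample_pseudo_tuples(["c1", "c2"], ["g1", "g2"], n=100, alpha=0.5)
```

With α = 0.5 and a large enough batch, roughly half of the sampled pseudo-tuples originate from each source, which matches the stated goal of balancing the two distributions.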