G2RL: Geometry-Guided Representation Learning for Facial Action Unit Intensity Estimation

Authors: Yingruo Fan, Zhaojiang Lin

IJCAI 2020

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "The experimental results on two benchmark datasets demonstrate that our method is comparable with the state-of-the-art approaches, and validate the effectiveness of incorporating external geometric knowledge for facial AU intensity estimation." (Section 4: Experiments) |
| Researcher Affiliation | Academia | "1Department of Electrical and Electronic Engineering, University of Hong Kong 2Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology yingruo@hku.hk, zlinao@connect.ust.hk" |
| Pseudocode | No | The paper describes the methodology using text and mathematical equations, but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology. It mentions using Dlib and TensorFlow but does not offer its own implementation code. |
| Open Datasets | Yes | "We adopt two benchmark datasets, BP4D [Zhang et al., 2014] and DISFA [Mavadati et al., 2013], for our experiments." |
| Dataset Splits | Yes | "In our experiments, we evaluate our method on BP4D using the official training/development partitions. While for DISFA, the 3-fold subject independent cross-validation is adopted for evaluation." |
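The DISFA protocol quoted above is a subject-independent 3-fold cross-validation: subjects (not frames) are partitioned into folds, so no subject contributes frames to both the training and test side of any fold. A minimal NumPy sketch of that partitioning, assuming the paper's exact fold assignment is unknown (the function name and random seeding here are hypothetical, not from the paper):

```python
import numpy as np

def subject_independent_folds(subject_ids, n_folds=3, seed=0):
    """Split sample indices into subject-independent CV folds.

    subject_ids: per-sample subject labels (one entry per frame).
    Returns a list of (train_indices, test_indices) pairs in which
    no subject appears on both sides of the same fold.
    """
    rng = np.random.default_rng(seed)
    subjects = np.array(sorted(set(subject_ids)))
    rng.shuffle(subjects)
    # Assign whole subjects (not individual frames) to folds.
    fold_subjects = np.array_split(subjects, n_folds)
    folds = []
    for k in range(n_folds):
        test_set = set(fold_subjects[k].tolist())
        test_idx = [i for i, s in enumerate(subject_ids) if s in test_set]
        train_idx = [i for i, s in enumerate(subject_ids) if s not in test_set]
        folds.append((train_idx, test_idx))
    return folds
```

Because DISFA subjects appear in many consecutive frames, a frame-level random split would leak near-duplicate faces across folds; splitting by subject avoids that.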
| Hardware Specification | Yes | "The framework is implemented in Tensorflow and NVIDIA GeForce GTX 1080Ti GPUs are used." |
| Software Dependencies | Yes | "The framework is implemented in Tensorflow" |
| Experiment Setup | Yes | "In the training phase, we use the Adam optimizer [Kingma and Ba, 2014], with the base learning rate of 5e-4. For parameter setting, we set the value of the standard deviation σ to 2 in the heatmap ground-truth generation (Equation 1), and assign 0.05 to λ in the overall loss function (Equation 8) according to the performance of G2RL." |
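The setup quote fixes σ = 2 for the heatmap ground-truth generation (the paper's Equation 1, an unnormalized 2-D Gaussian centered on each facial landmark). A minimal NumPy sketch of that kind of heatmap, assuming the standard landmark-heatmap formulation; the function name and grid dimensions are illustrative, not from the paper:

```python
import numpy as np

def landmark_heatmap(height, width, cx, cy, sigma=2.0):
    """Unnormalized Gaussian heatmap peaked at landmark (cx, cy).

    Each pixel (x, y) gets exp(-((x-cx)^2 + (y-cy)^2) / (2*sigma^2)),
    so the value is 1.0 at the landmark and decays with distance.
    sigma=2.0 matches the setting quoted from the paper.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
```

One such map per landmark, stacked along the channel axis, is the usual geometric supervision target; the quoted λ = 0.05 would then weight this heatmap loss against the intensity-estimation loss in the overall objective.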