Relative Uncertainty Learning for Facial Expression Recognition
Authors: Yuhang Zhang, Chengrui Wang, Weihong Deng
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that RUL outperforms state-of-the-art FER uncertainty learning methods in both real-world and synthetic noisy FER datasets. |
| Researcher Affiliation | Academia | Yuhang Zhang, Chengrui Wang, Weihong Deng; Beijing University of Posts and Telecommunications; zyhzyh@bupt.edu.cn, crwang@bupt.edu.cn, whdeng@bupt.edu.cn |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | The code is available at https://github.com/zyh-uaiaaaa/Relative-Uncertainty-Learning. |
| Open Datasets | Yes | RAF-DB [27] is a crowdsourced facial expression dataset that contains 29,672 facial images... FER2013 [14] consists of 35,887 grayscale 48x48 pixel images in total... AffectNet [33] is currently the largest FER dataset, including 440,000 images. |
| Dataset Splits | Yes | RAF-DB [27] ... 12,271 images as training data and 3,068 images as test data. FER2013 [14] ... with 28,709 training samples, 3,589 public test samples, and 3,589 private test samples. AffectNet [33] ... around 280,000 training images and 4,000 testing images annotated by humans. |
| Hardware Specification | Yes | The model is trained in an end-to-end manner with a single GTX 1080ti GPU for 70 epochs with batch size of 64. |
| Software Dependencies | No | The paper mentions software components like 'ResNet18', 'Adam optimizer', and 'ExponentialLR' but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We set dropout rate as 0.4, output dimension as 64. The model is trained in an end-to-end manner with a single GTX 1080ti GPU for 70 epochs with batch size of 64. We also utilize an Adam optimizer [24] with weight decay of 0.0001. The learning rate is initialized as 0.0002 except the last fully connected layer for classification, which is 0.002. We use ExponentialLR [30] learning rate scheduler with gamma of 0.9 to decrease the learning rate after each epoch. (A configuration sketch follows the table.) |
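
The reported experiment setup maps directly onto a standard PyTorch training configuration. Below is a minimal sketch, assuming a plain ResNet-18 backbone with a dropout/projection head and a 7-class output; the hyperparameters (Adam with weight decay 1e-4, base learning rate 2e-4 with 2e-3 for the final classification layer, ExponentialLR with gamma 0.9, 70 epochs, batch size 64) come from the quoted setup, while the model structure, class count, loss, and `train_loader` are assumptions, and the paper's RUL-specific uncertainty module and loss are not reproduced here.

```python
# Sketch of the training configuration quoted above (hyperparameters from the paper;
# the backbone/head layout and 7-class output are assumptions, not the RUL model itself).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7      # assumed: basic expression classes (RAF-DB / FER2013)
FEATURE_DIM = 64     # "output dimension as 64"
DROPOUT_RATE = 0.4   # "dropout rate as 0.4"

# Stand-in architecture: ResNet-18 features -> dropout/projection head -> classifier.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()
head = nn.Sequential(
    nn.Dropout(DROPOUT_RATE),
    nn.Linear(512, FEATURE_DIM),
    nn.ReLU(inplace=True),
)
classifier = nn.Linear(FEATURE_DIM, NUM_CLASSES)

# Adam with weight decay 1e-4; the last fully connected (classification) layer
# uses a 10x larger learning rate (2e-3 vs. 2e-4), as described in the setup.
optimizer = torch.optim.Adam(
    [
        {"params": backbone.parameters(), "lr": 2e-4},
        {"params": head.parameters(), "lr": 2e-4},
        {"params": classifier.parameters(), "lr": 2e-3},
    ],
    weight_decay=1e-4,
)
# Exponential decay of the learning rate after each epoch with gamma = 0.9.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

criterion = nn.CrossEntropyLoss()  # assumed classification loss for the sketch

def train(train_loader, epochs=70, device="cuda"):
    """Training-loop skeleton: 70 epochs, scheduler stepped once per epoch."""
    backbone.to(device); head.to(device); classifier.to(device)
    for epoch in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            logits = classifier(head(backbone(images)))
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()  # decay learning rate after each epoch
```

A `DataLoader` over the RAF-DB training split with `batch_size=64` would be passed as `train_loader` to match the reported training budget.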