Open-Set Facial Expression Recognition
Authors: Yuhang Zhang, Yue Yao, Xuannan Liu, Lixiong Qin, Wenjing Wang, Weihong Deng
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various FER datasets demonstrate that our method clearly outperforms state-of-the-art open-set recognition methods by large margins. |
| Researcher Affiliation | Academia | Yuhang Zhang¹, Yue Yao², Xuannan Liu¹, Lixiong Qin¹, Wenjing Wang¹, Weihong Deng¹; ¹Beijing University of Posts and Telecommunications, ²Australian National University; {zyhzyh, liuxuannan, lxqin, wwj311, whdeng}@bupt.edu.cn, yue.yao@anu.edu.au |
| Pseudocode | No | The paper includes a 'Pipeline' diagram in Figure 3, but it does not present any structured pseudocode or algorithm blocks with numbered steps or code-like formatting. |
| Open Source Code | Yes | Code is available at https://github.com/zyh-uaiaaaa. |
| Open Datasets | Yes | RAF-DB (Li, Deng, and Du 2017) is annotated with seven basic expressions...FERPlus (Barsoum et al. 2016) is extended from FER2013 (Goodfellow et al. 2013)...AffectNet (Mollahosseini, Hasani, and Mahoor 2017) is a large-scale FER dataset... |
| Dataset Splits | No | The paper specifies training and testing sample counts for RAF-DB, FERPlus, and AffectNet (e.g., '12,271 images for training and 3,068 images for testing' for RAF-DB), but it does not describe a separate validation split. |
| Hardware Specification | No | The paper states 'We utilize ResNet-18 as the backbone' but does not specify any hardware details such as GPU model, CPU type, or memory used to run the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam (Kingma and Ba 2014) optimizer' but does not provide specific version numbers for any software dependencies like Python, deep learning frameworks (e.g., TensorFlow, PyTorch), or CUDA. |
| Experiment Setup | Yes | The learning rate η is set to 0.0002 and we use Adam (Kingma and Ba 2014) optimizer with weight decay of 0.0001. The max training epoch Tmax is set to 40. (A minimal configuration sketch follows the table.) |
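For readers attempting to reproduce the reported setup, below is a minimal sketch combining the hyperparameters quoted in the Experiment Setup row with the ResNet-18 backbone quoted in the Hardware row. The paper does not name its framework (see the Software Dependencies row), so PyTorch/torchvision is an assumption here; the data loader, the plain cross-entropy loss, and the weight initialization are placeholders, not the authors' actual open-set pipeline.

```python
# Hedged sketch of the reported training setup. PyTorch/torchvision are
# assumptions: the paper does not state its framework. The loss and loader
# are placeholders for the authors' pipeline (Figure 3 in the paper).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7      # e.g., the seven basic expressions in RAF-DB
LR = 2e-4            # learning rate η from the paper
WEIGHT_DECAY = 1e-4  # Adam weight decay from the paper
T_MAX = 40           # max training epochs Tmax from the paper

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ResNet-18 backbone as stated in the paper; the classification head is
# replaced to match the number of expression classes. Initialization is
# left unspecified here because the paper's pretraining is not quoted.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=LR,
                             weight_decay=WEIGHT_DECAY)
criterion = nn.CrossEntropyLoss()  # placeholder, not the paper's objective

def train(train_loader):
    """Basic supervised loop; the method's open-set components are omitted."""
    model.train()
    for epoch in range(T_MAX):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```

This reflects only the settings the paper states explicitly; the authors' released code at https://github.com/zyh-uaiaaaa remains the authoritative implementation.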