Contrastive Adversarial Learning for Person Independent Facial Emotion Recognition
Authors: Daeha Kim, Byung Cheol Song
AAAI 2021, pp. 5948-5956 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, the proposed adversarial learning scheme was theoretically verified, and it was experimentally proven to show state-of-the-art (SOTA) performance. Quantitative evaluation: Table 1 shows the experimental results for the AffectNet dataset. The proposed method is better than the latest algorithms such as FHC (Kossaifi et al. 2020) and BreG-NeXt (Hasani, Negi, and Mahoor 2020). |
| Researcher Affiliation | Academia | Daeha Kim, Byung Cheol Song (Department of Electronic Engineering, Inha University, Incheon 22212, South Korea); kdhht5022@gmail.com, bcsong@inha.ac.kr |
| Pseudocode | Yes | Algorithm 1 describes the overview of CAF. |
| Open Source Code | Yes | Software is available at https://github.com/kdhht2334/Contrastive-Adversarial-Learning-FER |
| Open Datasets | Yes | AffectNet (Mollahosseini, Hasani, and Mahoor 2017) dataset consists of over a million images... AFEW-VA (Kossaifi et al. 2017) dataset is derived from the AFEW (Dhall et al. 2016) dataset... Aff-Wild (Zafeiriou et al. 2017) dataset consists of about 300 videos... |
| Dataset Splits | Yes | The test dataset is not released, so a part of the training dataset is randomly selected and used as an evaluation dataset in this paper, the same as in (Hasani, Negi, and Mahoor 2020). (A minimal split sketch follows the table.) |
| Hardware Specification | Yes | All experiments were performed on the Intel Xeon CPU and GeForce GTX 1080 Ti, with five training sessions per experiment. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and AlexNet/ResNet18 as backbones, but does not provide specific version numbers for software libraries like Python, PyTorch, TensorFlow, or CUDA. |
| Experiment Setup | Yes | Encoder, critic, and FC layers were optimized with the Adam (Kingma and Ba 2014) optimizer at a learning rate of 1e-4. The minibatch sizes for AlexNet and ResNet18 were set to 256 and 128, respectively. The parameters were updated for 50,000 iterations on the AffectNet and AFEW-VA datasets, and 100,000 iterations on the Aff-Wild dataset. We reduced the learning rate by 0.8 times every 10k iterations. (A configuration sketch follows the table.) |
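
As noted in the Dataset Splits row, the AffectNet test set is not public, so evaluation uses a random slice of the training set. Below is a minimal PyTorch sketch of such a split; the 90/10 ratio and the fixed seed are assumptions for illustration, since the paper does not report the exact proportion.

```python
import torch
from torch.utils.data import random_split

def make_eval_split(train_dataset, eval_fraction=0.1, seed=0):
    """Hold out a random slice of the training set as the evaluation set.

    eval_fraction and seed are illustrative assumptions; the paper does not
    report the exact proportion it used.
    """
    n_eval = int(len(train_dataset) * eval_fraction)
    n_train = len(train_dataset) - n_eval
    generator = torch.Generator().manual_seed(seed)  # reproducible split
    return random_split(train_dataset, [n_train, n_eval], generator=generator)
```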
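The Experiment Setup row maps directly onto a short PyTorch training configuration. The following is a hedged sketch, not the authors' implementation: the `encoder`, `critic`, and `fc` modules and the dummy loss are placeholders (the real backbones are AlexNet/ResNet18 and the real objective is the contrastive adversarial loss of Algorithm 1), but the optimizer, learning rate, batch sizes, iteration counts, and decay schedule follow the quoted values.

```python
import torch
import torch.nn as nn

# Placeholder modules; the paper uses AlexNet or ResNet18 as the encoder,
# plus separate critic and FC heads (see the authors' repository).
encoder = nn.Linear(128, 64)
critic = nn.Linear(64, 1)
fc = nn.Linear(64, 2)  # valence/arousal regression head

# Encoder, critic, and FC layers all share one Adam optimizer at 1e-4.
params = list(encoder.parameters()) + list(critic.parameters()) + list(fc.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
# StepLR stepped once per iteration reproduces "reduce the learning rate
# by 0.8 times every 10k iterations".
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10_000, gamma=0.8)

batch_size = 256         # AlexNet setting (128 for ResNet18)
num_iterations = 50_000  # AffectNet / AFEW-VA (100,000 for Aff-Wild)

for it in range(num_iterations):
    x = torch.randn(batch_size, 128)     # stand-in minibatch
    loss = fc(encoder(x)).pow(2).mean()  # dummy loss; the real objective is CAF's
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # 0.8x decay every 10,000 iterations
```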