CMNet: Contrastive Magnification Network for Micro-Expression Recognition
Authors: Mengting Wei, Xingxun Jiang, Wenming Zheng, Yuan Zong, Cheng Lu, Jiateng Liu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three public ME databases, i.e., CASME II, SAMM, and SMIC-HS, validate the superiority against state-of-the-art methods. |
| Researcher Affiliation | Academia | ¹Key Laboratory of Child Development and Learning Science of Ministry of Education; ²School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; ³School of Information Science and Engineering, Southeast University, Nanjing, China. {weimengting, jiangxingxun, wenming_zheng, xhzongyuan, cheng.lu, jiateng_liu}@seu.edu.cn |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the described methodology. |
| Open Datasets | Yes | Datasets: We conducted experiments on three public spontaneous ME datasets: SMIC-HS (Li et al. 2013), CASME II (Yan et al. 2014), SAMM (Davison et al. 2016). |
| Dataset Splits | Yes | We utilize Leave-One-Subject-Out (LOSO) cross-validation as the protocol to evaluate the performance of the proposed method. Specifically, for each database there are W folds (one per subject) in total. In each fold, the testing set collects the samples from one particular subject while the training set collects the samples from the remaining subjects. (A minimal LOSO splitting sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | Parameter Settings: We utilize the SGD optimizer with momentum = 0.9 and weight decay = 1e-4. The learning rate is set to 0.001 and is halved every 50 epochs. The data augmentation strategies used for augmenting anchor samples are Color Jitter, Random Grayscale, and Gaussian Blur. To keep the three losses on the same scale, we set λ1 = 0.07, λ2 = 0.71, λ3 = 0.22, respectively. These parameters were chosen by a grid search on the CASME II dataset; the set of parameters with the highest performance is fixed and reused on the other two datasets. The minimum mean ε is set to 0.2, and the margin in L_wrst is a fixed parameter with ξ = 0.01. (A hedged training-setup sketch follows the table.) |
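
The LOSO protocol quoted above maps directly onto scikit-learn's `LeaveOneGroupOut` splitter, with subject IDs as the groups. The sketch below is a minimal illustration of that split, not the authors' code; the arrays `X`, `y`, and `subjects` are hypothetical placeholders.

```python
# Minimal LOSO cross-validation sketch (hypothetical data, not the paper's code).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical placeholders: per-sample features, labels, and subject IDs.
X = np.random.rand(100, 128)                    # feature vectors
y = np.random.randint(0, 3, size=100)           # emotion class labels
subjects = np.random.randint(0, 20, size=100)   # subject ID per sample

logo = LeaveOneGroupOut()
for fold, (train_idx, test_idx) in enumerate(logo.split(X, y, groups=subjects)):
    # One fold per subject: that subject's samples form the test set,
    # all remaining subjects form the training set.
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # ... train and evaluate the model on this fold ...
```

The number of folds equals the number of unique subjects, matching the paper's W folds per database.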
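
The reported hyperparameters translate naturally into a PyTorch training configuration. The sketch below assembles them under stated assumptions: the model, the augmentation strengths (only the transform names are given in the paper), and the mapping of each λ weight to a specific loss term are all assumptions, not the released implementation.

```python
# Hedged sketch of the reported training setup (model, augmentation strengths,
# and loss-to-weight mapping are assumptions, not the authors' released code).
import torch
import torch.nn as nn
from torchvision import transforms

# Anchor-sample augmentations named in the paper; parameter values assumed.
anchor_aug = transforms.Compose([
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),   # jitter strengths assumed
    transforms.RandomGrayscale(p=0.2),            # probability assumed
    transforms.GaussianBlur(kernel_size=3),       # kernel size assumed
])

model = nn.Linear(128, 3)  # placeholder standing in for CMNet
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=1e-4)
# "Halved every 50 epochs" corresponds to a step scheduler with gamma = 0.5.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

lam1, lam2, lam3 = 0.07, 0.71, 0.22  # loss weights from the paper's grid search

def total_loss(loss_a, loss_b, loss_c):
    # Weighted sum keeping the three losses on the same scale; which weight
    # attaches to which of the paper's three loss terms is an assumption.
    return lam1 * loss_a + lam2 * loss_b + lam3 * loss_c
```

Note that `StepLR` with `gamma=0.5` is one reasonable reading of "reduces by half every 50 epochs"; the paper does not specify the scheduler implementation.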