Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

MMNet: Muscle Motion-Guided Network for Micro-Expression Recognition

Authors: Hanting Li, Mingzhe Sui, Zhaoqing Zhu, Feng Zhao

IJCAI 2022 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on three public micro-expression datasets demonstrate that our approach outperforms state-of-the-art methods by a large margin.
Researcher Affiliation Academia University of Science and Technology of China EMAIL, EMAIL
Pseudocode No No pseudocode or clearly labeled algorithm block was found.
Open Source Code Yes Code is available at https://github.com/muse1998/MMNet.
Open Datasets Yes To verify the effectiveness of our MMNet, we conduct extensive experiments on three popular micro-expression datasets including CASME II [Yan et al., 2014], SAMM [Davison et al., 2016], and MMEW [Ben et al., 2021].
Dataset Splits Yes Consistent with most previous works, leave-one-subject-out (LOSO) cross-validation is employed in all the experiments, which means every subject is taken as the testing set in turn and the remaining subjects as the training data.
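The LOSO protocol quoted above can be sketched in a few lines. This is a generic illustration, not code from the MMNet repository; the sample dictionaries and the `loso_splits` helper are hypothetical.

```python
# Minimal sketch of leave-one-subject-out (LOSO) cross-validation.
# Each sample is assumed to carry a subject ID; in each fold, one
# subject forms the test set and all others form the training set.
def loso_splits(samples):
    """Yield (held_out_subject, train, test) triples, one per subject."""
    subjects = sorted({s["subject"] for s in samples})
    for held_out in subjects:
        test = [s for s in samples if s["subject"] == held_out]
        train = [s for s in samples if s["subject"] != held_out]
        yield held_out, train, test


if __name__ == "__main__":
    # Toy data: 3 subjects, 2 clips each (purely illustrative).
    data = [{"subject": f"sub{j}", "clip": i} for j in range(3) for i in range(2)]
    for held_out, train, test in loso_splits(data):
        print(held_out, len(train), len(test))
```

With N subjects this yields N folds, and the reported accuracy is typically aggregated over all folds.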
Hardware Specification Yes All the experiments are conducted on a single NVIDIA RTX 3070 card with the PyTorch toolbox.
Software Dependencies No The paper mentions the 'PyTorch toolbox' but does not specify a version number.
Experiment Setup Yes At the training stage, we adopt AdamW to optimize the MMNet with a batch size of 32. The learning rate is initialized to 0.0008 and decayed at an exponential rate over 70 epochs with a cross-entropy loss function. To avoid overfitting, we randomly pick a frame from the four frames around the labeled onset and apex frames as the onset frame and apex frame for training. Horizontal flipping, random cropping, and color jittering are also employed.
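The reported schedule (AdamW, batch size 32, initial learning rate 0.0008, exponential decay over 70 epochs) can be sketched without any framework dependency. In PyTorch this would typically be `torch.optim.AdamW` paired with `torch.optim.lr_scheduler.ExponentialLR`; the decay factor GAMMA below is an assumption, since the paper does not state it.

```python
# Sketch of the reported optimization schedule: initial LR 0.0008,
# exponentially decayed over 70 epochs. GAMMA is an assumed decay
# factor (not given in the paper); batch size 32 per the quote.
INIT_LR = 0.0008
EPOCHS = 70
GAMMA = 0.95  # assumption: per-epoch multiplicative decay factor
BATCH_SIZE = 32


def lr_at(epoch, init_lr=INIT_LR, gamma=GAMMA):
    """Exponential decay: lr(t) = init_lr * gamma ** t."""
    return init_lr * gamma ** epoch


# Full schedule over training, mirroring ExponentialLR's behavior.
schedule = [lr_at(e) for e in range(EPOCHS)]
```

The augmentations mentioned (horizontal flipping, random cropping, color jittering) map directly to standard `torchvision.transforms` components and are applied only at training time.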