Incomplete Multimodality-Diffused Emotion Recognition
Authors: Yuanzhi Wang, Yong Li, Zhen Cui
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform extensive experiments on publicly available MER datasets and achieve superior or comparable results across different missing modality patterns. |
| Researcher Affiliation | Academia | Yuanzhi Wang, Yong Li, Zhen Cui; PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China. {yuanzhiwang, yong.li, zhen.cui}@njust.edu.cn |
| Pseudocode | No | The paper describes the method using equations and text but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are released at https://github.com/mdswyz/IMDer. |
| Open Datasets | Yes | We consider two standard MER datasets to conduct experiments, including CMU-MOSI [32] and CMU-MOSEI [33]. |
| Dataset Splits | Yes | CMU-MOSI consists of 2199 monologue video clips, of which 1284, 229, and 686 samples are used as the training, validation, and testing sets. CMU-MOSEI contains 22856 movie review video clips, where 16326 samples are used for training and the remaining 1871 and 4659 samples are used for validation and testing (see the sketch after this table). |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software tools like BERT, Facet, and COVAREP but does not specify their version numbers. |
| Experiment Setup | Yes | The optimal setting for β is 0.1, chosen based on performance on the validation set. |
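
As a quick cross-check of the reported dataset splits and the β setting, below is a minimal Python sketch. The variable names (`SPLITS`, `BETA`) are hypothetical and not taken from the released IMDer code; the sketch only records the numbers quoted above and verifies that each dataset's train/validation/test counts sum to its reported total.

```python
# Hypothetical summary of the splits and hyperparameter reported in the table
# above (illustrative only; not part of the official IMDer repository).

SPLITS = {
    "CMU-MOSI":  {"total": 2199,  "train": 1284,  "valid": 229,  "test": 686},
    "CMU-MOSEI": {"total": 22856, "train": 16326, "valid": 1871, "test": 4659},
}

BETA = 0.1  # hyperparameter beta, selected via validation-set performance

for name, s in SPLITS.items():
    # Sanity check: the three splits should account for every sample.
    assert s["train"] + s["valid"] + s["test"] == s["total"], name
    print(f"{name}: {s['train']}/{s['valid']}/{s['test']} (train/valid/test)")
```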