Quantum Cognitively Motivated Decision Fusion for Video Sentiment Analysis
Authors: Dimitris Gkoumas, Qiuchi Li, Shahram Dehdashti, Massimo Melucci, Yijun Yu, Dawei Song (pp. 827-835)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on two benchmarking datasets illustrate that our model significantly outperforms various existing decision level and a range of state-of-the-art content-level fusion approaches. |
| Researcher Affiliation | Academia | 1 The Open University, Milton Keynes, UK 2 University of Padua, Padua, Italy 3 Queensland University of Technology, Brisbane, Australia 4 Beijing Institute of Technology, Beijing, China |
| Pseudocode | No | The paper describes the methodology in text but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing its source code or a link to a code repository. |
| Open Datasets | Yes | We evaluated the proposed model on two benchmarking datasets, namely, CMU Multimodal Opinion-level Sentiment Intensity (CMU-MOSI) (Zadeh et al. 2016) and CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) (Bagher Zadeh et al. 2018). |
| Dataset Splits | Yes | We estimated the uni-modal observables from training plus validation sets, and then we used the learnt observables for predicting the utterance sentiment on the test set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using 'MATLAB fsolve function' and techniques like 'Bi-GRU layers' and 'softmax layer', but does not specify exact version numbers for any key software components or libraries. |
| Experiment Setup | Yes | We randomly initialized the parameters {θG, θL, θV , θA, φL, φV , φA} ∈ [0, 2π] for uni-modal observable estimation, and {θT , φT } ∈ [0, 2π], ηT ∈ [0, 1] for utterance state estimation. The random initialization was repeated 200 times, and the optimum solution was selected by the minimum sum of squared loss. We used Bi-GRU layers (Cho et al. 2014) with forward and backward state concatenation, followed by fully connected layers. |
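The random-restart procedure quoted above (repeat random initialization in [0, 2π], solve, keep the solution with the minimum sum of squared loss) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper solves its observable equations with MATLAB's `fsolve`, whereas here a toy two-parameter residual system (`residuals`) stands in for the real equations and SciPy's `least_squares` stands in for `fsolve`.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params):
    """Hypothetical stand-in for the paper's uni-modal observable
    equations; the real system involves the angles {θ, φ} per modality."""
    theta, phi = params
    return [np.cos(theta) - 0.5, np.sin(phi) - np.sqrt(2) / 2]

def random_restart_fit(n_restarts=200, seed=0):
    """Repeat random initialization in [0, 2π] and keep the solution
    with the minimum sum of squared loss, as the setup describes."""
    rng = np.random.default_rng(seed)
    best_params, best_loss = None, np.inf
    for _ in range(n_restarts):
        # Random initialization of the parameters in [0, 2π]
        x0 = rng.uniform(0.0, 2.0 * np.pi, size=2)
        sol = least_squares(residuals, x0)
        loss = float(np.sum(sol.fun ** 2))  # sum of squared loss
        if loss < best_loss:
            best_loss, best_params = loss, sol.x
    return best_params, best_loss
```

With enough restarts, at least one initialization lands in the basin of a true root, so the returned loss is near zero even though individual restarts may converge to poor local minima.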