Towards Robust Multimodal Sentiment Analysis with Incomplete Data

Authors: Haoyu Zhang, Wenbin Wang, Tianshu Yu

NeurIPS 2024

Reproducibility assessment: each variable below is listed with its result, followed by the supporting LLM response.
Research Type: Experimental. We perform comprehensive experiments under random data-missing scenarios, using diverse and meaningful settings on several popular datasets (e.g., MOSI, MOSEI, and SIMS), providing greater uniformity, transparency, and fairness than existing evaluations in the literature. Empirically, LNLN consistently outperforms existing baselines, demonstrating superior performance across these challenging and extensive evaluation settings.
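To make the random data-missing protocol concrete, here is a minimal sketch of how one might corrupt a modality at a given missing rate. The function name, tensor shapes, and the frame-level zero-masking strategy are illustrative assumptions, not the authors' exact implementation (see the official repo for that).

```python
import torch

def random_missing(features, missing_rate, generator=None):
    """Zero out a random fraction of time steps in one modality.

    features: (batch, seq_len, dim) tensor for a single modality.
    missing_rate: fraction of time steps to drop, in [0, 1].
    NOTE: frame-level zero-masking is an illustrative assumption;
    the authors' exact corruption protocol is in the official repo.
    """
    batch, seq_len, _ = features.shape
    keep = torch.rand(batch, seq_len, generator=generator) >= missing_rate
    return features * keep.unsqueeze(-1).to(features.dtype)

# Example: drop 50% of the audio frames in a toy batch.
audio = torch.randn(8, 50, 74)  # (batch, seq_len, feature_dim)
corrupted = random_missing(audio, missing_rate=0.5)
```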
Researcher Affiliation: Academia. Haoyu Zhang (1,2), Wenbin Wang (3), Tianshu Yu (1). Affiliations: (1) School of Data Science, The Chinese University of Hong Kong, Shenzhen; (2) Department of Computer Science, University College London; (3) School of Computer Science, Wuhan University.
Pseudocode: No. The paper describes its methodology using textual descriptions and mathematical equations, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code: Yes. The code is available at: https://github.com/Haoyu-ha/LNLN
Open Datasets: Yes. Addressing this gap, our paper aims to offer a comprehensive evaluation on three widely used datasets, namely MOSI (Zadeh et al., 2016), MOSEI (Zadeh et al., 2018), and SIMS (Yu et al., 2020).
Dataset Splits: Yes. MOSI: the dataset includes 2,199 multimodal samples, integrating visual, audio, and language modalities; it is divided into a training set of 1,284 samples, a validation set of 229 samples, and a test set of 686 samples. MOSEI: the dataset consists of 22,856 video clips sourced from YouTube; the samples are divided into 16,326 clips for training, 1,871 for validation, and 4,659 for testing. SIMS: the dataset is a Chinese multimodal sentiment dataset that includes 2,281 video clips sourced from different movies and TV series; it is partitioned into 1,368 samples for training, 456 for validation, and 457 for testing.
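For reference, the reported split sizes can be sanity-checked against the dataset totals. The dictionary below simply restates the numbers quoted above; it is a minimal check, not part of the authors' code.

```python
# Train/validation/test sizes as reported for each dataset.
splits = {
    "MOSI":  {"train": 1284,  "valid": 229,  "test": 686,  "total": 2199},
    "MOSEI": {"train": 16326, "valid": 1871, "test": 4659, "total": 22856},
    "SIMS":  {"train": 1368,  "valid": 456,  "test": 457,  "total": 2281},
}

for name, s in splits.items():
    # Each dataset's splits should sum to its reported total.
    assert s["train"] + s["valid"] + s["test"] == s["total"], name
    print(f"{name}: splits sum to {s['total']} samples")
```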
Hardware Specification: Yes. The experiments were conducted on a PC with an AMD EPYC 7513 CPU and an NVIDIA Tesla A40 GPU.
Software Dependencies: Yes. We used PyTorch 2.2.1 to implement the method.
Experiment Setup: Yes. We used PyTorch 2.2.1 to implement the method. The experiments were conducted on a PC with an AMD EPYC 7513 CPU and an NVIDIA Tesla A40 GPU. To ensure consistent and fair comparisons across all methods, we conducted each experiment three times using fixed random seeds of 1111, 1112, and 1113. Details of the hyperparameters are given in Table 1, which lists the LNLN hyperparameters for each dataset: vector length T, vector dimension d, batch size, initial learning rate, loss weights α, β, γ, and δ, optimizer, epochs, warm-up, early stop, and seed.
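As a concrete illustration of the fixed-seed protocol, a minimal PyTorch seeding routine is sketched below. The helper name and the cuDNN determinism flags are common practice, assumed for illustration rather than taken from the LNLN code.

```python
import random
import numpy as np
import torch

def set_seed(seed):
    """Fix the common sources of randomness for a single run."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for GPU-side reproducibility (typical, optional).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

# The paper reports three runs with seeds 1111, 1112, and 1113.
for seed in (1111, 1112, 1113):
    set_seed(seed)
    # ... train and evaluate LNLN here, then average metrics over runs ...
```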