Gradient-Guided Modality Decoupling for Missing-Modality Robustness

Authors: Hao Wang, Shengda Luo, Guosheng Hu, Jianguo Zhang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on three popular multimodal benchmarks, including BraTS 2018 for medical segmentation, CMU-MOSI, and CMU-MOSEI for sentiment analysis. The results show that our method can significantly outperform the competitors, showing the effectiveness of the proposed solutions.
Researcher Affiliation | Collaboration | Hao Wang¹, Shengda Luo¹, Guosheng Hu², Jianguo Zhang¹,³* — ¹Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; ²Oosto, BT1 2BE, Belfast, U.K.; ³Peng Cheng Lab, Shenzhen, China
Pseudocode | No | The paper describes the GMD and DS methods in prose and with equations, but it does not include a formal pseudocode block or algorithm listing.
Open Source Code | Yes | Our code is released here: https://github.com/HaoWang420/Gradient-guided-Modality-Decoupling.
Open Datasets | Yes | Multimodal Brain Tumor Segmentation Challenge (BraTS 2018) (Menze et al. 2015) dataset is comprised of 285 multi-contrast MRI scans... The CMU-MOSI dataset (Zadeh et al. 2016) is a popular benchmark for Multimodal Sentiment Analysis (MSA). The CMU-MOSEI dataset (Zadeh et al. 2018) is an enhanced version of CMU-MOSI with a more extensive collection of utterances and improved variety in samples, speakers, and topics.
Dataset Splits | No | The paper states that "After training, models are tested against all missing-modality scenarios," but it does not specify a validation set or give train/validation/test split ratios or counts.
Hardware Specification | Yes | Experiments are conducted on 4 RTX3090 with 24 GB memory.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., "Python 3.8, PyTorch 1.9").
Experiment Setup | No | The paper describes evaluation metrics and some high-level training strategy (e.g., sampling k modality combinations, with k = 5), but it does not report specific hyperparameters such as learning rate, batch size, or number of epochs.
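To make the missing-modality protocol noted above concrete (models are tested against all missing-modality scenarios, while training samples k = 5 modality combinations), here is a minimal sketch that enumerates the 2⁴ − 1 = 15 non-empty availability patterns over the four BraTS MRI modalities and draws k of them. The modality names are the standard BraTS contrasts, but the function name and sampling scheme are illustrative assumptions, not the paper's implementation.

```python
import itertools
import random

# The four standard BraTS MRI contrasts (assumed here for illustration).
MODALITIES = ("T1", "T1c", "T2", "FLAIR")

# All non-empty availability patterns, i.e. every missing-modality scenario:
# 2^4 - 1 = 15 subsets.
ALL_COMBOS = [
    combo
    for r in range(1, len(MODALITIES) + 1)
    for combo in itertools.combinations(MODALITIES, r)
]

def sample_combinations(k=5, seed=0):
    """Draw k availability patterns for one training pass (hypothetical helper;
    the paper only states that k = 5 combinations are sampled)."""
    rng = random.Random(seed)
    return rng.sample(ALL_COMBOS, k)

patterns = sample_combinations()
```

At test time one would instead iterate over `ALL_COMBOS` exhaustively, matching the report's note that models are evaluated under every missing-modality scenario.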
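Since the paper presents GMD only in prose and equations (no pseudocode, per the table above), the following is a hedged sketch of what a gradient-decoupling step between per-modality gradients can look like. It implements a generic PCGrad-style conflict projection, not the authors' GMD algorithm; all names are illustrative.

```python
import numpy as np

def decouple_gradients(grads):
    """Generic gradient-conflict mitigation sketch (PCGrad-style), shown only
    to illustrate the idea of decoupling per-modality gradients. This is NOT
    the paper's GMD algorithm."""
    adjusted = [g.astype(float).copy() for g in grads]
    for i, gi in enumerate(adjusted):
        for j, gj in enumerate(grads):
            if i == j:
                continue
            dot = float(gi @ gj)
            if dot < 0.0:
                # Conflicting directions: remove the component of gi that
                # points against gj.
                gi -= dot / float(gj @ gj) * gj
    return adjusted

# Two per-modality gradients with a conflicting component.
g1 = np.array([1.0, 1.0])
g2 = np.array([-1.0, 0.5])
out = decouple_gradients([g1, g2])
```

After projection, each adjusted gradient is orthogonal (or better) to the gradients it conflicted with, so summing them no longer lets one modality's update cancel another's.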