Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Robult: Leveraging Redundancy and Modality-Specific Features for Robust Multimodal Learning

Authors: Duy A. Nguyen, Abhi Kamboj, Minh N. Do

IJCAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results across diverse datasets validate that Robult achieves superior performance over existing approaches in both semi-supervised learning and missing modality contexts.
Researcher Affiliation | Academia | Duy A. Nguyen (1,3), Abhi Kamboj (2), Minh N. Do (2,3). (1) Siebel School of Computing and Data Science, UIUC, US; (2) Department of Electrical and Computer Engineering, UIUC, US; (3) VinUni-Illinois Smart Health Center, VinUniversity, Vietnam.
Pseudocode | No | The paper describes its methodology, objectives, and loss functions in detail using mathematical formulations and textual descriptions, but it does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete statement about the release of source code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | CMU-MOSI [Zadeh et al., 2016] & CMU-MOSEI [Zadeh et al., 2018b]: Containing three modalities (text, audio, and visual), supporting sentiment analysis and emotion recognition tasks. Each video is labeled on a scale from -3 (negative) to 3 (positive) sentiment. MM-IMDb [Arevalo et al., 2017]: Containing image and text modalities, serving the genre classification task, which involves multi-label classification as a movie may have several genres. UPMC Food-101 [Wang et al., 2015]: Containing two modalities, text and images, this is a classification dataset consisting of 101 food categories. Hateful Memes [Kiela et al., 2020]: Containing text and image modalities, this dataset aims at identifying hate speech in memes.
Dataset Splits | Yes | The primary focus of our performance reporting is on two extreme scenarios: semi-supervised settings with only 5% labeled data and scenarios where only a single modality is present during evaluation. All reported results are averaged over 3 different random seeds. In the semi-supervised setup, the newly created labeled portion is ensured to maintain the same label ratio as the original training set.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the implementation.
Experiment Setup | Yes | Implementation details: To ensure a fair comparison, we use similar encoder architectures for processing raw data modalities whenever possible. The unimodal baselines are designed with the same architectures as Robult, each with its own classifier. For Robult, positive samples for the soft P-U loss are determined after discretizing labels if needed. Specifically, for the CMU-MOSI and CMU-MOSEI datasets, label information in the range [-3, 3] is quantized into 7 discrete categories (-3, -2, ..., 3). Additionally, for the multi-label dataset MM-IMDb, two samples are considered positive if they share all the same labels. [...] Specific details on implementation settings relating to each dataset are provided in Appendix C.2.
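The 5% semi-supervised split described under Dataset Splits (a labeled subset that preserves the per-class label ratio of the full training set) can be sketched with plain-Python stratified sampling; the function and variable names below are illustrative, not taken from the paper or any released code:

```python
import random
from collections import defaultdict

def stratified_labeled_split(labels, labeled_frac=0.05, seed=0):
    """Select a labeled subset whose per-class proportions match the
    full training set; remaining indices are treated as unlabeled."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    labeled = []
    for y, idxs in by_class.items():
        rng.shuffle(idxs)
        k = max(1, round(labeled_frac * len(idxs)))  # at least 1 per class
        labeled.extend(idxs[:k])
    unlabeled = sorted(set(range(len(labels))) - set(labeled))
    return sorted(labeled), unlabeled

# Example: a 600/400 class balance is preserved in the 5% labeled subset.
labels = [0] * 600 + [1] * 400
labeled, unlabeled = stratified_labeled_split(labels, 0.05, seed=42)
```

Averaging over 3 random seeds, as the paper reports, would amount to repeating this split with three different `seed` values.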
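The label handling described in Experiment Setup for the soft P-U loss can be sketched as follows. Both helper names are hypothetical, and the rounding scheme for the 7 sentiment categories is an assumption; the paper states only that labels in [-3, 3] are mapped to the discrete values -3 through 3:

```python
def discretize_sentiment(score):
    """Map a continuous CMU-MOSI/MOSEI sentiment score in [-3, 3] to one
    of 7 categories {-3, -2, ..., 3} by rounding to the nearest integer
    and clamping (assumed binning; the paper does not spell out the rule)."""
    return max(-3, min(3, round(score)))

def is_positive_pair(labels_a, labels_b):
    """For multi-label MM-IMDb, two samples count as a positive pair only
    if they share exactly the same set of genre labels, per the paper."""
    return set(labels_a) == set(labels_b)

# Examples
discretize_sentiment(2.4)   # -> 2
discretize_sentiment(-2.7)  # -> -3
is_positive_pair({"Drama", "Comedy"}, {"Comedy", "Drama"})  # -> True
is_positive_pair({"Drama"}, {"Drama", "Comedy"})            # -> False
```

Requiring an exact label-set match for multi-label positives is a strict criterion; it guarantees positives are semantically interchangeable at the cost of yielding fewer positive pairs.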