Interpretable Tensor Fusion

Authors: Saurabh Varshneya, Antoine Ledent, Philipp Liznerski, Andriy Balinskyy, Purvanshi Mehta, Waleed Mustafa, Marius Kloft

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on six real-world datasets show that InTense outperforms existing state-of-the-art multimodal interpretable approaches in terms of accuracy and interpretability.
Researcher Affiliation | Collaboration | Saurabh Varshneya¹, Antoine Ledent², Philipp Liznerski¹, Andriy Balinskyy¹, Purvanshi Mehta³, Waleed Mustafa¹ and Marius Kloft¹ (¹RPTU Kaiserslautern-Landau, ²Singapore Management University, ³Lica World)
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Full paper with technical appendix and code is available at: https://arxiv.org/abs/2405.04671
Open Datasets | Yes | To evaluate InTense in sentiment analysis, we employed CMU-MOSEI [Bagher Zadeh et al., 2018], the largest dataset of sentence-level sentiment analysis for real-world online videos, and CMU-MOSI [Zadeh et al., 2016], a collection of annotated opinion video clips... To assess our approach's effectiveness in these tasks, we utilized UR-FUNNY [Hasan et al., 2019] for humor detection and MUStARD [Castro et al., 2019] for sarcasm detection... For this paper, we considered the ENRICO [Leiva et al., 2020] dataset as an example for layout design categorization. We also include results for Audiovision-MNIST (AV-MNIST) [Vielzeuf et al., 2018], a multimodal dataset comprising images of handwritten digits and recordings of spoken digits.
Dataset Splits | Yes | In order to compare performance and ensure reproducibility, we followed the experimental setup (e.g., data preprocessing, encodings of different modalities) of the MultiBench [Liang et al., 2021] benchmark for all the experiments.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments (e.g., GPU/CPU models, memory specifications).
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9').
Experiment Setup | No | The paper states that it 'followed the experimental setup (e.g., data preprocessing, encodings of different modalities) of the MultiBench [Liang et al., 2021] benchmark', but it does not explicitly list concrete hyperparameter values or detailed training configurations within its own text.