Joint Multimodal Aspect Sentiment Analysis with Aspect Enhancement and Syntactic Adaptive Learning

Authors: Linlin Zhu, Heli Sun, Qunshu Gao, Tingzhou Yi, Liang He

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on benchmark datasets show that our method outperforms state-of-the-art methods. We conduct experiments on two public Twitter datasets [Yu and Jiang, 2019] (i.e., Twitter-2015 and Twitter-2017). As shown in Table 1, sentences with multiple aspects make up a significant portion of both datasets. We use these datasets for aspect enhancement pre-training and subsequent experiments. Implementation Details. We implement our method under a Linux system with CUDA version 10.2, PyTorch version 1.12.0, Python version 3.9, and an NVIDIA GeForce RTX 3090. In addition, we set the learning rate to 2e-5, the dropout to 0.1, and the hidden size to 768. Evaluation Metrics. We evaluate the performance of our model on the JMASA and MATE tasks by Micro-F1 score (F1), Precision (P), and Recall (R), while on the MASC task we use Accuracy (Acc) and F1, following previous studies."
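The Micro-F1 evaluation quoted above is standard for JMASA: predictions and gold labels are (aspect span, sentiment) pairs, and precision/recall are pooled over all sentences before computing F1. The sketch below is an illustrative implementation under that assumption; it is not the authors' evaluation script (per the Open Source Code row, none was released), and the pair representation is hypothetical.

```python
def micro_f1(pred_pairs, gold_pairs):
    """Micro-averaged P/R/F1 over (aspect span, sentiment) pairs.

    pred_pairs / gold_pairs: one set per sentence of
    ((start, end), sentiment) tuples. Counts are pooled across all
    sentences, then P, R, and F1 are computed once (micro-averaging).
    """
    tp = fp = fn = 0
    for pred, gold in zip(pred_pairs, gold_pairs):
        tp += len(pred & gold)   # exact span AND sentiment match
        fp += len(pred - gold)   # predicted pair not in gold
        fn += len(gold - pred)   # gold pair missed by the model
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For MATE, the same pooling applies with spans only (sentiment dropped from the tuples).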
Researcher Affiliation | Academia | "Linlin Zhu, Heli Sun, Qunshu Gao, Tingzhou Yi, Liang He. Xi'an Jiaotong University. {zhulinlin, jiubaoyibao}@stu.xjtu.edu.cn, 13673559526@163.com, {hlsun, lhe}@xjtu.edu.cn"
Pseudocode | No | The paper describes the model architecture and processes verbally and with diagrams (Figure 2), but it does not include any formal pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the methodology, nor a link to a code repository.
Open Datasets | Yes | "We conduct experiments on two public Twitter datasets [Yu and Jiang, 2019] (i.e., Twitter-2015 and Twitter-2017)."
Dataset Splits | Yes | Table 1 (basic statistics of the two Twitter datasets), counts given as Train/Dev/Test: Twitter-2015: Positive 928/303/317, Neutral 1,883/670/607, Negative 368/149/113. Twitter-2017: Positive 1,508/515/493, Neutral 1,638/517/573, Negative 416/144/168.
Hardware Specification | Yes | "We implement our method under a Linux system with CUDA version 10.2, PyTorch version 1.12.0, Python version 3.9, and an NVIDIA GeForce RTX 3090."
Software Dependencies | Yes | "We implement our method under a Linux system with CUDA version 10.2, PyTorch version 1.12.0, Python version 3.9, and an NVIDIA GeForce RTX 3090."
Experiment Setup | Yes | "We implement our method under a Linux system with CUDA version 10.2, PyTorch version 1.12.0, Python version 3.9, and an NVIDIA GeForce RTX 3090. In addition, we set the learning rate to 2e-5, the dropout to 0.1, and the hidden size to 768."
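The reported hyperparameters can be collected into a single configuration for reproduction attempts. This is a minimal sketch assuming the model is a BERT-base-style encoder (hidden size 768 matches BERT-base); the key names and any optimizer choice are illustrative, not taken from the authors' (unreleased) code.

```python
# Hyperparameters as reported in the paper; environment versions are
# recorded for reproducibility. Key names are hypothetical.
config = {
    "learning_rate": 2e-5,      # stated fine-tuning rate
    "dropout": 0.1,             # stated dropout probability
    "hidden_size": 768,         # matches BERT-base hidden dimension
    "python_version": "3.9",
    "pytorch_version": "1.12.0",
    "cuda_version": "10.2",
    "gpu": "NVIDIA GeForce RTX 3090",
}

# Example use (assuming a PyTorch model object `model` exists):
# optimizer = torch.optim.AdamW(model.parameters(),
#                               lr=config["learning_rate"])
```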