G^2SAM: Graph-Based Global Semantic Awareness Method for Multimodal Sarcasm Detection
Authors: Yiwei Wei, Shaozu Yuan, Hengyang Zhou, Longbiao Wang, Zhiling Yan, Ruosong Yang, Meng Chen
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our model achieves state-of-the-art (SOTA) performance in multimodal sarcasm detection. The primary experiments were carried out using the publicly accessible multimodal sarcasm detection dataset (Cai, Cai, and Wan 2019). |
| Researcher Affiliation | Collaboration | Yiwei Wei (1,5*), Shaozu Yuan (2), Hengyang Zhou (5), Longbiao Wang (1,4), Zhiling Yan (2), Ruosong Yang (2), Meng Chen (3); 1: Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China; 2: JD AI Research, Beijing, China; 3: Yep AI, Melbourne, Australia; 4: Huiyan Technology (Tianjin) Co., Ltd, Tianjin, China; 5: China University of Petroleum (Beijing) at Karamay, Karamay, China |
| Pseudocode | Yes | Algorithm 1: LGCL Algorithm |
| Open Source Code | Yes | The code will be available at https://github.com/upccpu/G2SAM. |
| Open Datasets | Yes | The primary experiments were carried out using the publicly accessible multimodal sarcasm detection dataset (Cai, Cai, and Wan 2019). |
| Dataset Splits | Yes | Table 1 (Statistics of the experimental data), Train/Val/Test: Positive 8642/959/959; Negative 11174/1451/1450; All 19816/2410/2409. |
| Hardware Specification | No | The paper mentions using pre-trained models and generally refers to GPUs for training but does not specify any particular GPU models, CPU models, or detailed hardware configurations used for the experiments. |
| Software Dependencies | Yes | To construct the textual graphs, we follow the HKE model (Liu, Wang, and Li 2022), where the text tokens serve as graph nodes and relations between words extracted with spaCy (https://spacy.io/) serve as edges. (A hedged graph-construction sketch follows the table.) |
| Experiment Setup | Yes | During training, we use the Adam optimizer with a learning rate of 2e-5, weight decay of 5e-3, batch size of 64, and dropout rate of 0.5. Early stopping with a patience of 5 is applied to prevent overfitting. For graph contrastive learning, the temperature τ̂ is set to 0.07. (Hedged sketches of the training configuration and a generic contrastive loss follow the table.) |
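
The Software Dependencies row reports that textual graphs follow the HKE model, with text tokens as nodes and spaCy-extracted relations between words as edges. The sketch below is only a plausible reconstruction of that step: the function name `build_text_graph`, the self-loops, and the undirected edges are assumptions, not details confirmed by the paper.

```python
# Hypothetical sketch: build a token-level dependency graph with spaCy.
# Requires the `en_core_web_sm` model; node/edge conventions are guesses,
# not the exact construction used by G2SAM or the HKE model.
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")

def build_text_graph(sentence: str):
    doc = nlp(sentence)
    tokens = [tok.text for tok in doc]
    n = len(tokens)
    adj = np.eye(n, dtype=np.float32)        # self-loops (assumption)
    for tok in doc:
        if tok.i != tok.head.i:              # dependency relation between a word and its head
            adj[tok.i, tok.head.i] = 1.0
            adj[tok.head.i, tok.i] = 1.0     # treat edges as undirected (assumption)
    return tokens, adj

tokens, adj = build_text_graph("what wonderful weather to be stuck in traffic")
print(tokens)
print(adj.shape)
```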
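
The Experiment Setup row lists the optimizer and regularization settings. The PyTorch sketch below wires those values together; `build_model`, `train_loader`, `val_loader`, and `evaluate` are hypothetical placeholders, and everything except the reported hyperparameters (Adam, lr 2e-5, weight decay 5e-3, batch size 64, dropout 0.5, patience 5) is an assumption.

```python
# Minimal sketch of the reported training configuration in PyTorch.
# `build_model`, `train_loader`, `val_loader`, and `evaluate` are placeholders;
# only the hyperparameter values come from the paper.
import torch

model = build_model(dropout=0.5)                      # dropout rate 0.5
optimizer = torch.optim.Adam(model.parameters(),
                             lr=2e-5, weight_decay=5e-3)

best_val, patience, bad_epochs = 0.0, 5, 0            # early stopping with patience 5
for epoch in range(50):                               # max-epoch count is an assumption
    model.train()
    for batch in train_loader:                        # DataLoader built with batch_size=64
        optimizer.zero_grad()
        loss = model(batch)
        loss.backward()
        optimizer.step()

    val_score = evaluate(model, val_loader)
    if val_score > best_val:
        best_val, bad_epochs = val_score, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```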
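
The Pseudocode row points to Algorithm 1 (LGCL), whose exact formulation is given only in the paper. For orientation, the snippet below shows a generic InfoNCE-style contrastive loss using the reported temperature of 0.07; it is a stand-in for the kind of graph contrastive objective described, not the paper's LGCL algorithm.

```python
# Hypothetical sketch: generic InfoNCE-style contrastive loss with temperature 0.07.
# This is NOT the paper's LGCL objective, only an illustrative stand-in.
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, negatives, tau=0.07):
    """anchor/positive: (d,) graph embeddings; negatives: (k, d)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * positive).sum() / tau            # similarity to the positive sample
    neg_sim = negatives @ anchor / tau                    # similarities to negative samples, (k,)
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])   # positive logit first
    target = torch.zeros(1, dtype=torch.long)             # index 0 is the positive
    return F.cross_entropy(logits.unsqueeze(0), target)

loss = info_nce_loss(torch.randn(256), torch.randn(256), torch.randn(16, 256))
print(loss.item())
```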