Multi-Modal Sarcasm Detection Based on Dual Generative Processes
Authors: Huiying Ma, Dongxiao He, Xiaobao Wang, Di Jin, Meng Ge, Longbiao Wang
IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on publicly available multi-modal sarcasm detection datasets validate the superiority of our proposed model. |
| Researcher Affiliation | Collaboration | (1) Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China; (2) Huiyan Technology (Tianjin) Co., Ltd, Tianjin, China; (3) Saw Swee Hock School of Public Health, National University of Singapore, Singapore |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link to its open-source code. |
| Open Datasets | Yes | We evaluate our model on the publicly available multi-modal sarcasm detection benchmark dataset collected from Twitter by [Cai et al., 2019]. |
| Dataset Splits | Yes | The dataset is divided into training, validation, and test sets, with an approximate ratio of 80%:10%:10%. Table 1 provides specific statistics regarding the dataset. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments; training is described only in general terms without naming particular devices. |
| Software Dependencies | No | The paper mentions "Huggingface pre-trained BERT-base-uncased model" and "Adam optimizer" but does not specify version numbers for these software dependencies. |
| Experiment Setup | Yes | We employ the Adam optimizer with a learning rate of 1e-4. To prevent over-fitting, we apply a dropout rate of 0.1 for intra-modal learning, and early stopping techniques are utilized. The parameters and batch size are set to 0.8 and 32, respectively. |
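
The experiment-setup row above reports only scalar hyperparameters (Huggingface BERT-base-uncased, Adam with a learning rate of 1e-4, dropout 0.1, batch size 32, early stopping). The sketch below is a minimal, hedged reconstruction of just those reported settings; it is not the authors' model, and the text-only classifier head, class names, and training-loop details are illustrative assumptions.

```python
# Minimal sketch of the reported training configuration (not the authors' code).
# Only the quoted hyperparameters are taken from the paper: bert-base-uncased,
# Adam with lr 1e-4, dropout 0.1, batch size 32, early stopping on validation.
# The classifier head and everything else here is a placeholder assumption.
import torch
from torch import nn
from transformers import BertModel


class TextOnlySarcasmClassifier(nn.Module):
    """Stand-in text encoder + classifier; the paper's model is multi-modal."""

    def __init__(self, dropout: float = 0.1, num_classes: int = 2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.dropout = nn.Dropout(dropout)  # dropout rate 0.1, as reported
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).pooler_output
        return self.classifier(self.dropout(pooled))


model = TextOnlySarcasmClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, lr 1e-4
batch_size = 32                                            # batch size 32
# Early stopping on validation loss would wrap the training loop (not shown),
# consistent with the "early stopping techniques are utilized" statement.
```

Because the paper gives no software versions or hardware details (see the rows above), any such reconstruction fixes those choices arbitrarily, which is exactly the reproducibility gap the table flags.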