Debiasing Multimodal Sarcasm Detection with Contrastive Learning
Authors: Mengzhao Jia, Can Xie, Liqiang Jing
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show the superiority of the proposed framework. We conducted experiments on a publicly available multimodal sarcasm detection benchmark dataset (Cai, Cai, and Wan 2019). |
| Researcher Affiliation | Academia | Mengzhao Jia¹, Can Xie¹, Liqiang Jing²* (¹Shandong University, ²University of Texas at Dallas) |
| Pseudocode | No | The paper does not contain a pseudocode block or a clearly labeled algorithm block. |
| Open Source Code | Yes | As a byproduct, we have released the source code and the constructed dataset (https://sharecode.wixsite.com/dmsd-cl). |
| Open Datasets | Yes | We conducted experiments on a publicly available multimodal sarcasm detection benchmark dataset (Cai, Cai, and Wan 2019). The dataset comprises 24,635 samples, each consisting of textual content, an associated image, and a corresponding label. As a byproduct, we have released the source code and the constructed dataset (https://sharecode.wixsite.com/dmsd-cl). |
| Dataset Splits | Yes | Following the original setting, the sizes of the training set, development set, and testing set are 19,815, 2,410, and 2,409, respectively. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types) used for running the experiments, only mentioning pre-trained models and APIs like RoBERTa, ViT, and gpt3.5-turbo. |
| Software Dependencies | No | The paper mentions using 'RoBERTa-base', 'ViT', and 'gpt3.5-turbo' for various tasks, but does not provide specific version numbers for these software dependencies or other libraries. |
| Experiment Setup | Yes | Meanwhile, the hyperparameters K, τ, and λ are configured to 4, 0.07, and 0.9, respectively. We used the Adam optimizer to optimize our model with a learning rate of 1e-5. The mini-batch size is set to 16 and the maximum number of epochs for training is set to 20. |
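
The Experiment Setup row is the only quantitative training recipe the report extracts, so a minimal sketch may help readers reconstruct the configuration. The sketch below assumes PyTorch; `model`, `train_loader`, and the way the classification and contrastive losses are combined via λ are hypothetical placeholders, and only the numeric values (K = 4, τ = 0.07, λ = 0.9, learning rate 1e-5, batch size 16, 20 epochs) come from the row above.

```python
import torch
import torch.nn.functional as F

# Values quoted in the Experiment Setup row; their roles beyond what the
# row states are assumptions.
K = 4            # hyperparameter K from the paper (role not detailed in the row)
TAU = 0.07       # temperature for the contrastive objective
LAMBDA = 0.9     # weighting hyperparameter λ
LR = 1e-5        # Adam learning rate
BATCH_SIZE = 16  # mini-batch size
MAX_EPOCHS = 20  # maximum training epochs


def info_nce(anchor: torch.Tensor, positive: torch.Tensor, tau: float = TAU) -> torch.Tensor:
    """Standard InfoNCE-style contrastive loss over a batch of embeddings.

    `anchor` and `positive` are (B, D) embeddings; the other in-batch samples
    serve as negatives. This is a generic formulation, not necessarily the
    exact contrastive loss used in the paper.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / tau                       # (B, B) similarity matrix
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)


# Hypothetical training loop: `model` and `train_loader` are placeholders.
# optimizer = torch.optim.Adam(model.parameters(), lr=LR)
# for epoch in range(MAX_EPOCHS):
#     for batch in train_loader:                               # batches of BATCH_SIZE
#         cls_logits, anchor_emb, pos_emb = model(batch)
#         loss_cls = F.cross_entropy(cls_logits, batch["label"])
#         loss_con = info_nce(anchor_emb, pos_emb, tau=TAU)
#         loss = LAMBDA * loss_cls + (1 - LAMBDA) * loss_con   # assumed combination via λ
#         optimizer.zero_grad(); loss.backward(); optimizer.step()
```

A temperature of 0.07 with an InfoNCE-style loss is a common setting in contrastive learning; the paper's actual objective and the role of K may differ from this sketch.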