TFCD: Towards Multi-modal Sarcasm Detection via Training-Free Counterfactual Debiasing

Authors: Zhihong Zhu, Xianwei Zhuang, Yunyan Zhang, Derong Xu, Guimin Hu, Xian Wu, Yefeng Zheng

IJCAI 2024

Reproducibility Variable | Result | Supporting LLM Response

Research Type | Experimental
  "Extensive experiments on two benchmarks demonstrate the effectiveness of the proposed TFCD. Remarkably, TFCD requires neither data balancing nor model modifications, and thus can be seamlessly integrated into diverse state-of-the-art approaches and achieve considerable improvement margins."

Researcher Affiliation | Collaboration
  Zhihong Zhu (1), Xianwei Zhuang (1), Yunyan Zhang (2), Derong Xu (3), Guimin Hu (4), Xian Wu (2), Yefeng Zheng (2). Affiliations: 1 Peking University; 2 Jarvis Research Center, Tencent YouTu Lab; 3 University of Science and Technology of China; 4 University of Copenhagen.

Pseudocode | No
  The paper describes its methods in natural language and mathematical equations, but it does not include any explicitly labeled pseudocode or algorithm blocks.

Open Source Code | No
  The paper makes no explicit statement about releasing source code and provides no link to a code repository for the described methodology.

Open Datasets | Yes
  "We conduct experiments on two datasets: MMSD [Cai et al., 2019] and MMSD2.0 [Qin et al., 2023]."

Dataset Splits | Yes
  MMSD and MMSD2.0 use identical split sizes: 19,816 training, 2,410 validation, and 2,409 test sentences each.

Hardware Specification | Yes
  "All experiments are conducted on Nvidia Tesla V100 GPUs."

Software Dependencies | No
  "The proposed TFCD and five reproducible models are implemented on PyTorch [Paszke et al., 2017]." (The framework is named, but no versioned dependency list is given.)

Experiment Setup | Yes
  "We have utilized grid search to determine the optimal values for the parameters α and β on the validation set. The grid search is performed with a step size of 0.1 and a range spanning from 0 to 1. For a fair comparison, the training settings (e.g., loss function, batch size, learning rate strategy, etc.) of these models are consistent with the details reported in their original papers."
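The grid search described in the Experiment Setup row (α and β swept over [0, 1] with step 0.1 on the validation set) can be sketched as follows. This is a minimal illustration, not the paper's code: the `score_fn` stand-in is hypothetical, whereas a real run would evaluate TFCD's debiased predictions on the validation split for each (α, β) pair.

```python
from itertools import product

def grid_search(score_fn, step=0.1):
    """Exhaustively try every (alpha, beta) pair on a [0, 1] grid with the
    given step and return the pair that maximizes score_fn."""
    # Grid points 0.0, 0.1, ..., 1.0 (rounding cleans up float drift).
    grid = [round(i * step, 10) for i in range(int(1 / step) + 1)]
    # Evaluate all 11 x 11 = 121 combinations and keep the best one.
    return max(product(grid, grid), key=lambda ab: score_fn(*ab))

# Toy stand-in for validation accuracy, peaked at alpha=0.3, beta=0.7.
best_alpha, best_beta = grid_search(
    lambda a, b: -(a - 0.3) ** 2 - (b - 0.7) ** 2
)
```

With a 0.1 step this is only 121 evaluations, which matches the paper's claim that tuning TFCD requires no retraining, since α and β only reweight the counterfactual debiasing at inference time.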