Fallacious Argument Classification in Political Debates

Authors: Pierpaolo Goffredo, Shohreh Haddadan, Vorakit Vorakitphan, Elena Cabrio, Serena Villata

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our results show the important role played by argument components and relations in this task. We propose a new transformer-based model architecture, fine-tuned on argumentation features, and we address an extensive evaluation obtaining very promising results." (an illustrative sketch of such an architecture follows this table)
Researcher Affiliation | Academia | "Université Côte d'Azur, CNRS, Inria, I3S, France; University of Luxembourg, Luxembourg"
Pseudocode | No | The paper includes a figure illustrating the pipeline for fallacious argument classification (Figure 1), but it does not provide any pseudocode or a clearly labeled algorithm block.
Open Source Code | Yes | https://github.com/pierpaologoffredo/IJCAI2022
Open Datasets | Yes | "To investigate fallacious arguments in political debates, we extend the annotations of the ElecDeb60To16 dataset [Haddadan et al., 2019], that collects televised debates of the presidential election campaigns in the U.S. from 1960 to 2016. 31 debates of the ElecDeb60To16 dataset are fully annotated with fallacies." (https://github.com/pierpaologoffredo/IJCAI2022)
Dataset Splits | No | The paper specifies only an "80% for training and the remaining 20% for testing" split, without explicitly mentioning a separate validation set (a minimal split sketch follows this table).
Hardware Specification | No | The paper discusses software models and parameters but does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for the experiments.
Software Dependencies | Yes | "The implementation is based on the HuggingFace Transformers library using PyTorch (version 1.7.0)."
Experiment Setup | Yes | "The selected learning rate is 5e-5, dropout 0.1, and batch size 1 for Transformer-XL, and 8 for the rest. All transformer models apply the Adam optimizer, dropout 0.1, and Cross Entropy as a loss function." (a hedged training-setup sketch follows this table)
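The paper describes its model only at the level of the quoted abstract, so the following is a hedged sketch of one plausible reading of "fine-tuned on argumentation features": pooled transformer text embeddings concatenated with embeddings of argument-component and argument-relation labels before a classification layer. All dimensions, label counts, and names here are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class FallacyClassifier(nn.Module):
    """Hypothetical classifier combining text and argumentation features."""

    def __init__(self, model_name="bert-base-uncased",
                 n_components=4, n_relations=3, n_fallacies=6):
        super().__init__()
        # Pretrained text encoder (checkpoint name is an assumption).
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Embeddings for argument component / relation labels (sizes assumed).
        self.comp_emb = nn.Embedding(n_components, 32)
        self.rel_emb = nn.Embedding(n_relations, 32)
        self.dropout = nn.Dropout(0.1)  # dropout 0.1, as reported
        self.classifier = nn.Linear(hidden + 64, n_fallacies)

    def forward(self, input_ids, attention_mask, comp_id, rel_id):
        # Use the [CLS] token representation as the snippet embedding.
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        # Concatenate text embedding with argumentation-feature embeddings.
        feats = torch.cat([cls, self.comp_emb(comp_id), self.rel_emb(rel_id)],
                          dim=-1)
        return self.classifier(self.dropout(feats))
```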
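A minimal sketch of the reported 80/20 split, assuming the fallacy-annotated snippets are available as parallel lists of texts and labels; the variable names and placeholder data are hypothetical, and the repository's actual file format is not reproduced here. Consistent with the Dataset Splits row above, no validation set is carved out.

```python
from sklearn.model_selection import train_test_split

# Placeholder data: in practice these would be read from the fallacy-annotated
# ElecDeb60To16 files in the authors' repository (loading step left abstract).
snippets = ["snippet a", "snippet b", "snippet c", "snippet d", "snippet e"]
labels   = [0, 1, 0, 2, 1]  # hypothetical integer-coded fallacy categories

# 80% for training, the remaining 20% for testing, as reported in the paper.
train_texts, test_texts, train_labels, test_labels = train_test_split(
    snippets, labels, test_size=0.20, random_state=42
)
```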
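The reported hyperparameters can be assembled into a fine-tuning skeleton with HuggingFace Transformers and PyTorch. The checkpoint name and number of labels below are assumptions (the paper evaluates several transformers, with batch size 8 for all but Transformer-XL, which uses 1), and the argumentation features from the sketch above are omitted for brevity.

```python
import torch
from torch.nn import CrossEntropyLoss
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any of the evaluated checkpoints
NUM_LABELS = 6                    # assumption: number of fallacy categories

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=NUM_LABELS,
    hidden_dropout_prob=0.1,  # dropout 0.1, as reported
)

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)  # Adam, lr 5e-5
loss_fn = CrossEntropyLoss()  # cross-entropy loss, as reported

# One illustrative training step on a dummy batch (batch size 8 in the paper).
batch = tokenizer(["a debate snippet"], return_tensors="pt",
                  padding=True, truncation=True)
labels = torch.tensor([0])

model.train()
logits = model(**batch).logits
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```

Note that the paper names plain Adam rather than the AdamW default used in many HuggingFace examples, so the sketch follows the paper's wording.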