Emotions in Argumentation: an Empirical Evaluation
Authors: Sahbi Benlamine, Maher Chaouachi, Serena Villata, Elena Cabrio, Claude Frasson, Fabien Gandon
IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we assess this claim by means of an experiment: during several debates people's argumentation in plain English is connected and compared to the emotions automatically detected from the participants. Our results show a correspondence between emotions and argumentation elements... and To answer these questions, we propose an empirical evaluation of the connection between argumentation and emotions. This paper describes an experiment with human participants which studies the correspondences between the arguments and their relations put forward during a debate, and the emotions detected by emotion recognition systems in the debaters. |
| Researcher Affiliation | Academia | Sahbi Benlamine (Univ. de Montreal, Canada, benlamim@iro.umontreal.ca); Maher Chaouachi (Univ. de Montreal, Canada, chaouacm@iro.umontreal.ca); Serena Villata (INRIA, France, serena.villata@inria.fr); Elena Cabrio (INRIA, France, elena.cabrio@inria.fr); Claude Frasson (Univ. de Montreal, Canada, frasson@iro.umontreal.ca); Fabien Gandon (INRIA, France, fabien.gandon@inria.fr) |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The authors provide a publicly available dataset (http://bit.ly/TextualArgumentsDataset) and debriefing data (http://bit.ly/DebriefingData), but there is no explicit statement or link indicating that the source code for their methodology is open-source or publicly released. |
| Open Datasets | Yes | An important result is the development of a publicly available dataset capturing several debate situations, and annotating them with their argumentation structure and the emotional states automatically detected. and The datasets of textual arguments are available at http://bit.ly/TextualArgumentsDataset. |
| Dataset Splits | No | The paper describes the creation of a dataset composed of 598 different arguments and 263 argument pairs, and its annotation, but does not specify any explicit training, validation, or test dataset splits. There is no mention of percentages, sample counts for splits, or cross-validation setup. |
| Hardware Specification | Yes | Each participant has been provided with 1 laptop device equipped with internet access and a camera used to detect facial emotions. Moreover, each participant has been equipped with an EEG headset to detect engagement index. and Emotiv EPOC EEG headset. It has been used to record physiological data during the debate sessions. It contains 14 electrodes spatially organized according to the International 10-20 system, moistened with a saline solution (contact lens cleaning solution). As shown in Figure 1, the electrodes are placed at AF3, AF4, F3, F4, F7, F8, FC5, FC6, P7, P8, T7, T8, O1, O2, and two other reference sensors are placed behind the ears. The generated data are in µV with a 128 Hz sampling rate. The signal frequencies are between 0.2 and 60 Hz. (These specifications are concrete enough to sketch the EEG preprocessing; see the first example after the table.) |
| Software Dependencies | Yes | For these reasons, in our experiments we have used webcams for facial expressions analysis relying on the FACEREADER 6.0 software and Face Reader 6.0 via a webcam. |
| Experiment Setup | Yes | The experimental setting of each debate is conceived as follows: there are 4 participants for each discussion group (each participant is placed far from the other participants, even if they are in the same room), and 2 moderators located in a different room from the participants. The moderators interact with the participants uniquely through the debate platform. The language used for debating is English. In order to provide an easy-to-use debate platform to the participants, without requiring from them any background knowledge, we decide to rely on a simple IRC network as debate platform. The debate is anonymous and participants are visible to each other with their nicknames, e.g., participant1, while the moderators are visualized as moderator1 and moderator2. Each participant has been provided with 1 laptop device equipped with internet access and a camera used to detect facial emotions. Moreover, each participant has been equipped with an EEG headset to detect engagement index. Each moderator used only a laptop. (A minimal sketch of such an IRC client is the second example after the table.) |
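The Hardware Specification row pins down the EEG signal chain precisely enough to sketch the preprocessing a reproducer would need: 14 channels, a 128 Hz sampling rate, and a 0.2-60 Hz signal band. The Python sketch below band-pass filters a recording to that range and computes the classic beta / (alpha + theta) engagement index (Pope et al., 1995). The paper does not publish its exact computation, so the band boundaries and the index definition here are assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' code): filter Emotiv EPOC EEG data
# to the 0.2-60 Hz range quoted in the paper and compute a standard
# engagement index. The beta/(alpha+theta) formula and the band boundaries
# are assumptions based on common practice, not taken from the paper.
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

FS = 128          # Emotiv EPOC sampling rate (Hz), per the paper
N_CHANNELS = 14   # AF3, AF4, F3, F4, F7, F8, FC5, FC6, P7, P8, T7, T8, O1, O2

def bandpass(eeg, low=0.2, high=60.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass matching the quoted 0.2-60 Hz range."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)

def band_power(eeg, band, fs=FS):
    """Average power within a frequency band, estimated via Welch's method."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[..., mask].mean(axis=-1)

def engagement_index(eeg):
    """beta / (alpha + theta), averaged over channels (Pope et al., 1995)."""
    theta = band_power(eeg, (4, 8))
    alpha = band_power(eeg, (8, 13))
    beta = band_power(eeg, (13, 22))
    return float(np.mean(beta / (alpha + theta)))

# Example on synthetic data: 10 s of 14-channel noise standing in for a recording.
eeg = bandpass(np.random.randn(N_CHANNELS, FS * 10))
print(f"engagement index: {engagement_index(eeg):.3f}")
```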
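The Experiment Setup row also specifies the debate platform: a plain IRC network with anonymized nicknames such as participant1 and moderator1. As a rough illustration of how little tooling that requires, here is a minimal IRC client that registers a participant nickname and joins a debate channel. The server, port, channel name, and message are hypothetical, since the paper does not identify its IRC network or channel.

```python
# Hypothetical sketch of a participant client joining an anonymous debate
# channel over plain IRC. Server, port, channel, and message text are
# illustrative placeholders; the paper names none of them.
import socket

SERVER, PORT = "irc.example.org", 6667   # hypothetical debate server
CHANNEL, NICK = "#debate1", "participant1"

sock = socket.create_connection((SERVER, PORT))

def send(line: str) -> None:
    """IRC messages are CRLF-terminated plain-text lines."""
    sock.sendall((line + "\r\n").encode("utf-8"))

# Minimal registration handshake (RFC 2812), then join the debate channel.
send(f"NICK {NICK}")
send(f"USER {NICK} 0 * :{NICK}")
send(f"JOIN {CHANNEL}")
send(f"PRIVMSG {CHANNEL} :Hello, this is {NICK} joining the debate.")

# Print channel traffic and answer server PINGs to keep the connection alive.
buffer = b""
while True:
    data = sock.recv(4096)
    if not data:
        break
    buffer += data
    while b"\r\n" in buffer:
        raw, buffer = buffer.split(b"\r\n", 1)
        line = raw.decode("utf-8", errors="replace")
        if line.startswith("PING"):
            send("PONG" + line[4:])
        else:
            print(line)
```

Anonymity in this setup comes entirely from the nickname convention: nothing in the protocol exchange above carries a participant's identity, which matches the paper's claim that the platform demands no background knowledge from debaters.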