HARGAN: Heterogeneous Argument Attention Network for Persuasiveness Prediction

Authors: Kuo-Yu Huang, Hen-Hsen Huang, Hsin-Hsi Chen (pp. 13045-13054)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that our methodology achieves the state-of-the-art performance in persuasiveness prediction on the Change My View dataset."
Researcher Affiliation | Academia | 1 Department of Computer Science and Information Engineering, National Taiwan University, Taiwan; 2 Department of Computer Science, National Chengchi University, Taiwan; 3 MOST Joint Research Center for AI Technology and All Vista Healthcare, Taiwan. kyhuang@nlg.csie.ntu.edu.tw, hhhuang@nccu.edu.tw, hhchen@ntu.edu.tw
Pseudocode | No | No pseudocode or clearly labeled algorithm block found.
Open Source Code | Yes | "Our code is publicly available for the research community." (Footnote 2: https://github.com/seasa2016/Heterogeneous Argument Attention Network)
Open Datasets | Yes | "Our study is conducted on the Change My View (CMV) dataset (Tan et al. 2016)."
Dataset Splits | Yes | "We randomly split the training set into two parts, 80% of instances for training and 20% of instances for validation." Table 1 (statistics of the Change My View dataset):

             Train    Dev   Test
# trees        969    241    311
# pairs      14922   3504   5013
Avg. turns    2.87   2.96   2.83
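The 80%/20% random split described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, seed, and tree-level granularity are assumptions.

```python
import random

def split_train_dev(trees, dev_ratio=0.2, seed=0):
    """Randomly split the training trees into train/dev portions,
    mirroring the 80%/20% split described for the CMV dataset.
    (Illustrative sketch; not the paper's implementation.)"""
    rng = random.Random(seed)
    shuffled = list(trees)
    rng.shuffle(shuffled)
    n_dev = int(len(shuffled) * dev_ratio)
    return shuffled[n_dev:], shuffled[:n_dev]

# With 1210 training trees, a 20% cut yields 968 train / 242 dev.
train, dev = split_train_dev(list(range(1210)))
```

Splitting at the tree level (rather than at the pair level) keeps all comment pairs from one discussion tree in the same partition, which avoids leakage between train and validation.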
Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) used for running experiments are mentioned.
Software Dependencies | No | The paper mentions BERT, ELMo, Bi-LSTM, and NLTK, but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | "To prevent error propagation from the previous stage, we loosen the constraint and allow the structure to be a graph. That is to say, each ADU could link to more than one parent ADU. We compute the child-parent score between the current ADU and each candidate using Equation (4), and choose the top-k candidates to link with. In our experiment, k is set to three. ... where the activation function is ELU (Clevert, Unterthiner, and Hochreiter 2016) and M, the number of attention heads, is chosen to be 4 in our experiment. ... where α is set to 0.01 in our experiment."
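The top-k parent selection quoted above can be sketched as below. This is a hedged illustration: the paper's Equation (4) produces the child-parent scores, which are assumed here as a plain list, and the function names are hypothetical.

```python
import math

def topk_parents(scores, k=3):
    """Given child-parent link scores between the current ADU and each
    candidate ADU (Equation 4 in the paper), keep the top-k candidates
    as parents; k = 3 follows the paper's setting."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:min(k, len(scores))]

def elu(x, alpha=1.0):
    """ELU activation (Clevert, Unterthiner, and Hochreiter 2016)
    used in the paper's attention layers."""
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)

def leaky_relu(x, alpha=0.01):
    """LeakyReLU with the slope alpha = 0.01 set in the experiments
    (assuming the quoted alpha refers to this activation)."""
    return x if x >= 0 else alpha * x

# Candidate ADUs with scores 0.2, 0.9, 0.1, 0.7, 0.5:
# the three highest-scoring candidates are indices 1, 3, 4.
parents = topk_parents([0.2, 0.9, 0.1, 0.7, 0.5])  # -> [1, 3, 4]
```

Allowing up to three parents per ADU relaxes the tree constraint into a graph, so a wrong single-parent choice in the earlier stage cannot silently discard the correct link.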