Weakly-Supervised Hierarchical Models for Predicting Persuasive Strategies in Good-faith Textual Requests

Authors: Jiaao Chen, Diyi Yang
Pages: 12648–12656

AAAI 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results showed that our proposed method outperformed existing semi-supervised baselines significantly." |
| Researcher Affiliation | Academia | "Jiaao Chen, Diyi Yang, School of Interactive Computing, Georgia Institute of Technology, {jchen896,dyang888}@gatech.edu" |
| Pseudocode | No | The paper describes the model architecture and training process in text, but no formal pseudocode or algorithm blocks are provided. |
| Open Source Code | Yes | "We have publicly released our code at https://github.com/GT-SALT/Persuasion_Strategy_WVAE." |
| Open Datasets | No | The paper describes the creation of a new multi-domain text corpus but does not provide concrete access information (e.g., a specific link, DOI, repository name, or formal citation) for the dataset itself. |
| Dataset Splits | Yes | "Table 3: Split statistics about train, dev, and test set." Borrow: 900 / 400 / 400; RAOP: 300 / 200 / 300; Kiva: 1000 / 400 / 400 (train / dev / test). |
| Hardware Specification | No | "We acknowledge the support of NVIDIA Corporation with the donation of GPU used for this research." |
| Software Dependencies | No | The paper mentions software components such as NLTK, BERT, LSTM, MLP, and AdamW, but does not specify version numbers or other concrete software dependencies required for replication. |
| Experiment Setup | No | The paper states that hyper-parameters were "tuned ... on the development set" and refers to an appendix for "Parameters details", but these details are not provided within the main text of the paper. |
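
For anyone scripting a reproduction, the split statistics quoted in the Dataset Splits row above can be encoded directly. A minimal sketch in Python; the variable name and layout are illustrative and not taken from the authors' released code:

```python
# Split sizes reported in Table 3 of the paper (train / dev / test counts).
# SPLIT_SIZES is an illustrative name, not part of the released code.
SPLIT_SIZES = {
    "Borrow": {"train": 900, "dev": 400, "test": 400},
    "RAOP": {"train": 300, "dev": 200, "test": 300},
    "Kiva": {"train": 1000, "dev": 400, "test": 400},
}

if __name__ == "__main__":
    # Sanity check: print each dataset's split sizes and total example count.
    for dataset, sizes in SPLIT_SIZES.items():
        total = sum(sizes.values())
        print(f"{dataset}: {sizes} (total: {total})")
```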