An Affect-Rich Neural Conversational Model with Biased Attention and Weighted Cross-Entropy Loss

Authors: Peixiang Zhong, Di Wang, Chunyan Miao (pp. 7492-7500)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present the results of applying our techniques on text, graph, and text-and-graph based models, and discuss the implications of using external knowledge to solve the NLI problem. Our model achieves close to state-of-the-art performance for NLI on the SciTail science questions dataset.
Researcher Affiliation | Collaboration | Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana-Champaign, IL, USA (xiaoyan5@illinois.edu); IBM T.J. Watson Research Center, IBM Research, Yorktown Heights, NY, USA ({ramusa, kapanipa, yum, krtalamad, achille, witbrock}@us.ibm.com; {ibrahim.abdelaziz1, maria.chang, bassem.makni, n.mattei}@ibm.com)
Pseudocode | No | The paper describes mathematical formulas and model components but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using the AllenNLP library to implement models but does not provide a link to, or an explicit statement about releasing, their own source code.
Open Datasets | Yes | We use the SciTail dataset (Khot, Sabharwal, and Clark 2018), which is a textual entailment dataset derived from publicly released science domain multiple choice question answering datasets (Welbl, Liu, and Gardner 2017; Clark et al. 2016). The hypothesis is created using the question and the correct answer from the options; the premise is retrieved from the ARC corpus (data.allenai.org/arc/arc-corpus).
Dataset Splits | No | The paper reports results on a 'dev set' and 'test set' but does not specify the exact split percentages or sample counts for the training, validation, and test sets. It mentions the total dataset size (27,026 sentence pairs) but not its distribution across splits.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions software such as 'AllenNLP', 'Adagrad', 'DBpedia Spotlight', 'spaCy', and 'OpenKE', but does not provide version numbers for these dependencies, which are necessary for full reproducibility.
Experiment Setup | Yes | All words in the text model are initialized with 300D GloVe vectors (GloVe 840B 300D) (nlp.stanford.edu/projects/glove), and the concepts that act as input for the graph model are initialized with 300D ConceptNet PPMI vectors (Speer, Chin, and Havasi 2017); these are openly available for ConceptNet. We use the pre-trained embeddings without any fine-tuning. We adapted match-LSTM with GRUs as our text and graph based model. The system is trained with Adagrad at a learning rate of 0.001 and a batch size of 40. The text and graph based models are trained jointly.
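To make the extracted setup concrete, the following is a minimal sketch of the reported training configuration, assuming PyTorch. The class and function names (JointEntailmentModel, train), the dataset object, and the simplified GRU encoders are hypothetical placeholders; the paper's actual match-LSTM matching layers and graph model are not reproduced here. Only the quoted details are taken from the report: frozen 300D pre-trained embeddings, GRU-based text and graph encoders trained jointly, Adagrad with learning rate 0.001, and batch size 40.

```python
# Hypothetical sketch of the reported setup (not the authors' code).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

EMBED_DIM = 300  # GloVe 840B 300D for words, ConceptNet PPMI for concepts


class JointEntailmentModel(nn.Module):
    def __init__(self, glove_weights, conceptnet_weights, hidden_dim=300, num_classes=2):
        super().__init__()
        # Pre-trained embeddings are used without fine-tuning (freeze=True).
        self.word_emb = nn.Embedding.from_pretrained(glove_weights, freeze=True)
        self.concept_emb = nn.Embedding.from_pretrained(conceptnet_weights, freeze=True)
        # Plain GRU encoders stand in for the match-LSTM-with-GRUs text model
        # and the graph model; the real attention/matching layers are omitted.
        self.text_encoder = nn.GRU(EMBED_DIM, hidden_dim, batch_first=True)
        self.graph_encoder = nn.GRU(EMBED_DIM, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, word_ids, concept_ids):
        _, h_text = self.text_encoder(self.word_emb(word_ids))
        _, h_graph = self.graph_encoder(self.concept_emb(concept_ids))
        return self.classifier(torch.cat([h_text[-1], h_graph[-1]], dim=-1))


def train(model, dataset, num_epochs=10):
    # Both components are optimized jointly with a single Adagrad optimizer,
    # matching the reported learning rate (0.001) and batch size (40).
    optimizer = torch.optim.Adagrad(model.parameters(), lr=0.001)
    loader = DataLoader(dataset, batch_size=40, shuffle=True)
    criterion = nn.CrossEntropyLoss()
    for _ in range(num_epochs):
        for word_ids, concept_ids, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(word_ids, concept_ids), labels)
            loss.backward()
            optimizer.step()
```

The two-class output head reflects SciTail's entails/neutral labels; the number of epochs and hidden size are assumptions, as the report does not state them.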