Filling Conversation Ellipsis for Better Social Dialog Understanding

Authors: Xiyuan Zhang, Chengxi Li, Dian Yu, Samuel Davidson, Zhou Yu

AAAI 2020, pp. 9587–9595

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the performance on two specific tasks: dialog act prediction and semantic role labeling. Our approach improves dialog act prediction and semantic role labeling by 1.3% and 2.5% in F1 score respectively in social conversations.
Researcher Affiliation | Collaboration | Xiyuan Zhang (1), Chengxi Li (1), Dian Yu (2), Samuel Davidson (2), Zhou Yu (2); 1: Zhejiang University, 2: University of California, Davis
Pseudocode | No | The paper describes the model components and steps but does not include any formal pseudocode or algorithm blocks.
Open Source Code | No | The annotated dataset is publicly available: https://gitlab.com/ucdavisnlp/filling-conversation-ellipsis (this statement covers only the dataset, not the source code for the method itself).
Open Datasets | Yes | We evaluate our Hybrid-EL-CMP on a dataset collected in our in-lab user studies with a social bot on the Alexa platform (Gunrock dataset) (Chen et al. 2018a). This dataset provides real human-machine social conversations... The annotated dataset is publicly available: https://gitlab.com/ucdavisnlp/filling-conversation-ellipsis
Dataset Splits | Yes | We use five-fold cross validation to conduct hyperparameter tuning of our models. Once we have identified the optimal hyperparameters, such as number of epochs and learning rate, we combine the validation and training data for final model training. (A minimal sketch of this protocol appears after the table.)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments are provided in the paper.
Software Dependencies | No | The paper mentions software such as OpenNMT, BERT, and Bi-LSTMs, but does not give version numbers for these dependencies (e.g., 'OpenNMT (Klein et al. 2017)' is cited without a version such as 'OpenNMT vX.Y').
Experiment Setup | Yes | For both models, the encoder and decoder are 2-layer LSTMs and we set the hidden state size to 500. The dropout rate is 0.3. ... The initial learning rate is 1. We train the model with the Adam optimizer. The initial learning rate is 5e-5. (A configuration sketch based on these values appears after the table.)
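
The Dataset Splits row quotes the authors' tuning protocol: five-fold cross validation to choose hyperparameters such as the number of epochs and the learning rate, followed by retraining on the combined training and validation data. The Python sketch below is a minimal illustration of that protocol, assuming scikit-learn's KFold and placeholder callables build_model and f1_score_fn; it is not the authors' code.

```python
import numpy as np
from sklearn.model_selection import KFold

def tune_and_retrain(texts, labels, hparam_grid, build_model, f1_score_fn):
    """Pick hyperparameters by five-fold CV, then retrain on all the data."""
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    best_hparams, best_f1 = None, -1.0
    for hparams in hparam_grid:                      # e.g. {"epochs": 3, "lr": 5e-5}
        fold_f1s = []
        for train_idx, val_idx in kf.split(texts):
            model = build_model(**hparams)
            model.fit([texts[i] for i in train_idx], [labels[i] for i in train_idx])
            preds = model.predict([texts[i] for i in val_idx])
            fold_f1s.append(f1_score_fn([labels[i] for i in val_idx], preds))
        mean_f1 = float(np.mean(fold_f1s))
        if mean_f1 > best_f1:
            best_hparams, best_f1 = hparams, mean_f1
    # "we combine the validation and training data for final model training"
    final_model = build_model(**best_hparams)
    final_model.fit(list(texts), list(labels))
    return final_model, best_hparams
```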
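
The Experiment Setup row mixes values from two components: a 2-layer LSTM encoder-decoder with hidden size 500 and dropout 0.3 (the ellipsis-completion models, built with OpenNMT in the paper) and Adam with an initial learning rate of 5e-5 (the BERT-based classification models). The snippet below is an illustrative PyTorch/Transformers reconstruction of those settings, not the authors' implementation; the vocabulary size, embedding dimension, label count, and the attribution of the learning rate of 1 to the Seq2Seq side are assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertForSequenceClassification

# Completion model (illustrative; the paper builds it with OpenNMT):
# "the encoder and decoder are 2-layer LSTMs ... hidden state size to 500 ... dropout rate is 0.3"
VOCAB_SIZE, EMB_DIM, HIDDEN = 30000, 500, 500    # vocabulary/embedding sizes are placeholders
embedding = nn.Embedding(VOCAB_SIZE, EMB_DIM)
encoder = nn.LSTM(EMB_DIM, HIDDEN, num_layers=2, dropout=0.3, batch_first=True)
decoder = nn.LSTM(EMB_DIM, HIDDEN, num_layers=2, dropout=0.3, batch_first=True)
output_proj = nn.Linear(HIDDEN, VOCAB_SIZE)

# "The initial learning rate is 1." is assumed here to refer to the Seq2Seq training
# (OpenNMT's default SGD schedule); this attribution is a guess, not a quote.
seq2seq_params = (list(embedding.parameters()) + list(encoder.parameters())
                  + list(decoder.parameters()) + list(output_proj.parameters()))
seq2seq_optimizer = torch.optim.SGD(seq2seq_params, lr=1.0)

# Classification models: BERT fine-tuned with Adam at an initial learning rate of 5e-5.
NUM_LABELS = 4                                   # placeholder; the label set size is not given here
classifier = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_LABELS)
classifier_optimizer = torch.optim.Adam(classifier.parameters(), lr=5e-5)
```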