Parsing Natural Language Conversations using Contextual Cues
Authors: Shashank Srivastava, Amos Azaria, Tom Mitchell
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We formulate semantic parsing of conversations as a structured prediction task, incorporating structural features that model the flow of discourse across sequences of utterances. We create a dataset for semantic parsing of conversations, consisting of 113 real-life sequences of interactions of human users with an automated email assistant. The data contains 4759 natural language statements paired with annotated logical forms. Our approach yields significant gains in performance over traditional semantic parsing. ... Table 2 compares the performance of variations of our method for semantic parsing with conversational context (SPCon) with baselines on the held-out test set of conversational sequences. |
| Researcher Affiliation | Academia | Shashank Srivastava (Carnegie Mellon University, ssrivastava@cmu.edu); Amos Azaria (Ariel University, amos.azaria@ariel.ac.il); Tom Mitchell (Carnegie Mellon University, tom.mitchell@cmu.edu) |
| Pseudocode | No | The paper describes the inference procedure and model in prose, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | We created a dataset of real-life user conversations in an email assistant environment. ... For evaluation and future comparisons, we split the data into a training fold (93 conversation sequences) and a test fold (20 conversation sequences). The paper does not provide access information (e.g., URL, DOI) for the newly created dataset. |
| Dataset Splits | No | For training our models, we tune parameters, i.e. number of training epochs (5), and the number of clusters (K = 3) through 10-fold cross-validation on the training data. Cross-validation is described for hyperparameter tuning, but the paper does not define a separate, fixed validation split for evaluating the final model. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions using a CCG-based semantic parsing approach and the PAL lexicon induction algorithm, but does not provide specific version numbers for any software or libraries. |
| Experiment Setup | Yes | For training our models, we tune parameters, i.e. number of training epochs (5), and the number of clusters (K = 3) through 10-fold cross-validation on the training data. |
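The Research Type row quotes the paper's framing of conversational semantic parsing as structured prediction, with structural features modeling the flow of discourse across utterances. As a toy illustration only (not the paper's model: the feature templates, label set, and weights below are invented), the following Python sketch decodes a conversation left to right, letting each prediction condition on the previous predicted logical form, the kind of discourse-context cue such structural features capture.

```python
from collections import defaultdict

# Toy sketch only: NOT the paper's model. It illustrates greedy
# left-to-right decoding over a conversation, where each utterance's
# prediction conditions on the previously predicted logical form.
# Feature templates, labels, and weights are invented for illustration.

def extract_features(utterance, prev_label):
    """Bag-of-words features plus one discourse-context feature."""
    feats = defaultdict(float)
    for w in utterance.lower().split():
        feats[f"word={w}"] += 1.0
    feats[f"prev={prev_label}"] = 1.0  # contextual cue from the prior turn
    return feats

def score(weights, feats):
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def parse_sequence(weights, labels, utterances):
    """Greedily assign one (placeholder) logical form per utterance."""
    prev, preds = "<START>", []
    for utt in utterances:
        best = max(labels, key=lambda y: score(weights[y], extract_features(utt, prev)))
        preds.append(best)
        prev = best
    return preds

# Usage with placeholder labels standing in for logical forms.
labels = ["setAlarm()", "sendEmail()", "undo()"]
weights = {y: {} for y in labels}
weights["sendEmail()"]["word=email"] = 1.0
weights["undo()"]["prev=sendEmail()"] = 0.5
print(parse_sequence(weights, labels, ["send an email to bob", "undo that"]))
# -> ['sendEmail()', 'undo()']: the second prediction relies on the
#    contextual feature, since "undo that" alone is ambiguous.
```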
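The Open Datasets row notes a split of the 113 conversation sequences into 93 training and 20 test sequences, with no published split file. If one were rebuilding the dataset, a minimal sketch like the following could reproduce a split of that shape; the shuffle seed and the placeholder data are assumptions, not the paper's actual partition.

```python
import random

# Hypothetical reconstruction of the paper's split: 113 conversation
# sequences divided into 93 training and 20 test sequences. The paper
# does not publish the split, so the seed and ordering are assumptions.
def split_sequences(sequences, n_train=93, seed=0):
    """Shuffle conversation sequences and split into train/test folds."""
    rng = random.Random(seed)
    shuffled = list(sequences)
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

# Placeholder data: each sequence is a list of (utterance, logical form) pairs.
conversations = [[("set an alarm", "setAlarm()")] for _ in range(113)]
train, test = split_sequences(conversations)
assert len(train) == 93 and len(test) == 20
```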
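The Experiment Setup row quotes the tuning protocol: the number of training epochs (5) and the number of clusters (K = 3) chosen by 10-fold cross-validation on the training data. Below is a hedged sketch of that protocol, assuming scikit-learn's `KFold` and stub `train_parser`/`accuracy` helpers in place of the paper's unreleased training and evaluation code; the candidate grids are illustrative, since the paper reports only the selected values.

```python
from itertools import product
from statistics import mean

from sklearn.model_selection import KFold

def train_parser(seqs, epochs, n_clusters):
    """Stub standing in for the paper's (unreleased) parser training."""
    return {"epochs": epochs, "k": n_clusters}

def accuracy(model, seqs):
    """Stub standing in for logical-form accuracy on held-out sequences."""
    return 0.0

def tune_hyperparameters(train_seqs, epoch_grid=(1, 3, 5, 10), k_grid=(2, 3, 5)):
    """Pick (epochs, K) by mean accuracy over 10 cross-validation folds."""
    kf = KFold(n_splits=10, shuffle=True, random_state=0)
    best, best_score = None, float("-inf")
    for epochs, k in product(epoch_grid, k_grid):
        fold_scores = []
        for train_idx, dev_idx in kf.split(train_seqs):
            fold_train = [train_seqs[i] for i in train_idx]
            fold_dev = [train_seqs[i] for i in dev_idx]
            model = train_parser(fold_train, epochs=epochs, n_clusters=k)
            fold_scores.append(accuracy(model, fold_dev))
        if mean(fold_scores) > best_score:
            best, best_score = (epochs, k), mean(fold_scores)
    return best  # the paper reports epochs = 5, K = 3 as the selected values
```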