Towards Understanding Natural Language: Semantic Parsing, Commonsense Knowledge Acquisition and Applications

Authors: Arpit Sharma

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the steps we took towards the goal and the tools/techniques we developed, such as a semantic parser and a novel algorithm to automatically acquire commonsense knowledge from text. We also show the usefulness of the developed tools by applying them to solve tasks such as hard coreference resolution.
Researcher Affiliation | Academia | Arpit Sharma, Arizona State University, asharm73@asu.edu
Pseudocode | No | The paper describes algorithms and implementations but does not include any explicit pseudocode blocks or formally labeled algorithm sections.
Open Source Code | No | The paper links to an online GUI for the parser (www.kparser.org) and a demo of the extracted knowledge (http://bioai8core.fulton.asu.edu/knet); these are online demonstrations, not the source code of the described methodology.
Open Datasets | Yes | This type of knowledge has proved helpful in solving a subset of the Winograd Schema Challenge (WSC) [Sharma et al., 2015a], which is a hard co-reference resolution challenge.
Dataset Splits | No | The paper mentions using the Winograd Schema Challenge (WSC) and extracting knowledge from a large text repository, but it does not specify any dataset splits for training, validation, or testing (e.g., percentages or sample counts).
Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running the experiments (e.g., CPU, GPU models, or memory specifications).
Software Dependencies | No | The paper mentions using a 'logic programming (Answer Set Programming) based reasoning agent' and provides a footnote link to http://potassco.sourceforge.net/teaching.html, which is a teaching resource; it does not specify version numbers for any software components, libraries, or solvers used in the experiments.
Experiment Setup | No | The paper does not provide specific details about the experimental setup, such as hyperparameter values, learning rates, batch sizes, or optimizer settings used during training or evaluation.
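For context on the "hard coreference resolution" task the table refers to: a WSC instance pairs an ambiguous pronoun with two candidate antecedents, where resolving the pronoun requires commonsense knowledge rather than surface cues. Below is a minimal illustrative sketch in Python. The schema text is the classic trophy/suitcase example from the WSC literature; the WinogradSchema container and the resolve() stub are hypothetical conveniences, not components of the paper's system.

    # Illustrative sketch of a Winograd Schema Challenge instance.
    # The dataclass fields and resolve() stub are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class WinogradSchema:
        sentence: str        # sentence containing the ambiguous pronoun
        pronoun: str         # the pronoun to be resolved
        candidates: tuple    # the two candidate antecedents
        answer: str          # gold antecedent

    schema = WinogradSchema(
        sentence="The trophy doesn't fit in the brown suitcase "
                 "because it is too big.",
        pronoun="it",
        candidates=("the trophy", "the suitcase"),
        answer="the trophy",  # swapping "big" for "small" flips the answer
    )

    def resolve(schema: WinogradSchema) -> str:
        # Placeholder: a real system (e.g., the paper's semantic-parsing
        # plus commonsense-knowledge pipeline) would reason about which
        # candidate the pronoun refers to.
        return schema.candidates[0]

    print(resolve(schema) == schema.answer)  # True for this placeholder

The point of the example is why the task resists dataset-split-style evaluation of surface statistics: the correct antecedent hinges on a single commonsense fact (big things do not fit in small containers), which is the kind of knowledge the paper's acquisition algorithm targets.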