Fast and Accurate Prediction of Sentence Specificity

Authors: Junyi Jessy Li, Ani Nenkova

AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this paper we present a practical system for predicting sentence specificity which exploits only features that require minimum processing and is trained in a semi-supervised manner. Our system outperforms the state-of-the-art method for predicting sentence specificity and does not require part of speech tagging or syntactic parsing as the prior methods did."
Researcher Affiliation | Academia | "Junyi Jessy Li and Ani Nenkova, University of Pennsylvania, Philadelphia, PA 19104, {ljunyi,nenkova}@seas.upenn.edu"
Pseudocode | Yes | "Algorithm 1: Co-training algorithm for predicting sentence specificity" (see the co-training sketch after the table)
Open Source Code | No | The paper provides a link to a tool called SPECITELLER (http://www.cis.upenn.edu/~nlp/software/speciteller.html), but it does not explicitly state that this link provides the open-source code for the methodology described in the paper.
Open Datasets | Yes | "To train our semi-supervised model for sentence specificity, we follow prior work (Louis and Nenkova 2011a) and use a repurposed corpus of binary annotations of specific and general sentences drawn from Wall Street Journal articles originally annotated for discourse analysis (Prasad et al. 2008). We then make use of unlabeled data for co-training. The unlabeled data is extracted from the Associated Press and New York Times portions of the Gigaword corpus (Graff and Cieri 2003), as well as Wall Street Journal articles from the Penn Treebank corpus."
Dataset Splits | Yes | "The value of αi is determined via 10-fold cross validation on the labeled training data. We choose the lowest threshold for which the prediction accuracy of the classifier on sentences with posterior probability exceeding the threshold is greater than 85%. This threshold turned out to be 0.8 for both classifiers." (see the threshold-selection sketch after the table)
Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using the NLTK package and SRILM (Stolcke 2002), but it does not provide version numbers for these software dependencies, which would be required for full reproducibility.
Experiment Setup | Yes | "Here we set the values p = 1000, n = 1500. This 1:1.5 ratio is selected by tuning the accuracy of prediction on the initial discourse training data after 30,000 new examples are added. We impose a further constraint that the posterior probability of a new example given by Ci must be greater than a threshold αi. The value of αi is determined via 10-fold cross validation on the labeled training data. We choose the lowest threshold for which the prediction accuracy of the classifier on sentences with posterior probability exceeding the threshold is greater than 85%. This threshold turned out to be 0.8 for both classifiers. We use 100 clusters for the results reported here." (see the cluster-feature sketch after the table)
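
The Pseudocode row points at the paper's Algorithm 1, which this report does not reproduce. As a point of reference, below is a minimal sketch of a generic two-view co-training loop in Python with scikit-learn. The logistic-regression base classifiers, the single shared labeled pool, the helper `select_confident`, and the tie-breaking on duplicate picks are assumptions for illustration, not the authors' implementation; the values p = 1000, n = 1500, and alpha = 0.8 mirror the quoted experiment setup.

```python
# A minimal co-training sketch, NOT the paper's Algorithm 1: two feature
# "views" (Xa/Xb for labeled data, Ua/Ub for unlabeled data), a shared
# labeled pool, and confidence-based selection. p, n, alpha follow the
# quoted experiment setup; everything else is illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_confident(probs, p, n, alpha):
    """Indices and labels of up to p confident 'specific' (class 1) and
    up to n confident 'general' (class 0) examples above threshold alpha."""
    spec = np.where(probs[:, 1] > alpha)[0]
    gen = np.where(probs[:, 0] > alpha)[0]
    spec = spec[np.argsort(-probs[spec, 1])][:p]
    gen = gen[np.argsort(-probs[gen, 0])][:n]
    idx = np.concatenate([spec, gen])
    lab = np.concatenate([np.ones(len(spec)), np.zeros(len(gen))])
    return idx, lab

def cotrain(Xa, Xb, y, Ua, Ub, p=1000, n=1500, alpha=0.8, rounds=20):
    for _ in range(rounds):
        if Ua.shape[0] == 0:
            break
        ca = LogisticRegression(max_iter=1000).fit(Xa, y)
        cb = LogisticRegression(max_iter=1000).fit(Xb, y)
        # Each classifier labels the unlabeled pool from its own view.
        ia, la = select_confident(ca.predict_proba(Ua), p, n, alpha)
        ib, lb = select_confident(cb.predict_proba(Ub), p, n, alpha)
        idx = np.concatenate([ia, ib])
        lab = np.concatenate([la, lb])
        if len(idx) == 0:
            break  # nothing confident enough to add this round
        # On duplicate picks, keep the first label (a simplification).
        idx, first = np.unique(idx, return_index=True)
        lab = lab[first]
        # Move the newly labeled examples into the shared training pool.
        Xa = np.vstack([Xa, Ua[idx]])
        Xb = np.vstack([Xb, Ub[idx]])
        y = np.concatenate([y, lab])
        keep = np.setdiff1d(np.arange(Ua.shape[0]), idx)
        Ua, Ub = Ua[keep], Ub[keep]
    return (LogisticRegression(max_iter=1000).fit(Xa, y),
            LogisticRegression(max_iter=1000).fit(Xb, y))
```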
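
The threshold-selection procedure quoted in the Dataset Splits row is mechanical enough to sketch. The version below assumes a logistic-regression base classifier and an illustrative grid of candidate thresholds (the paper's candidate set is not quoted); the 85% accuracy target and the "lowest qualifying threshold" rule follow the quoted text.

```python
# Sketch of threshold selection via 10-fold cross validation. The base
# classifier and the candidate grid are assumptions of this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def pick_threshold(X, y, candidates=(0.6, 0.7, 0.8, 0.9), target=0.85):
    # Collect out-of-fold posteriors once, then test every candidate.
    probs = np.zeros((len(y), 2))
    for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
        probs[te] = clf.predict_proba(X[te])
    conf = probs.max(axis=1)   # posterior of the predicted class
    pred = probs.argmax(axis=1)
    for alpha in sorted(candidates):
        mask = conf > alpha
        # Accuracy restricted to sentences predicted above the threshold.
        if mask.any() and (pred[mask] == y[mask]).mean() > target:
            return alpha
    return None  # no candidate reaches the target accuracy
```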
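
Finally, the Experiment Setup row mentions 100 clusters without restating how they were built. Purely to illustrate one way word clusters become sentence features, here is a k-means sketch over a dictionary of pretrained word vectors; the clustering algorithm, the `word_vectors` input, and the normalized-histogram featurization are assumptions of this sketch, not the paper's procedure.

```python
# Illustration only: derive k = 100 word clusters from pretrained word
# vectors and represent each sentence by its normalized cluster histogram.
# Assumes word_vectors maps at least k words to fixed-length vectors.
import numpy as np
from sklearn.cluster import KMeans

def cluster_features(sentences, word_vectors, k=100, seed=0):
    vocab = sorted(word_vectors)
    vecs = np.array([word_vectors[w] for w in vocab])
    cluster_of = dict(zip(vocab, KMeans(n_clusters=k, n_init=10,
                                        random_state=seed).fit_predict(vecs)))
    feats = np.zeros((len(sentences), k))
    for i, sent in enumerate(sentences):
        hits = [cluster_of[w] for w in sent.lower().split() if w in cluster_of]
        for c in hits:
            feats[i, c] += 1
        if hits:
            feats[i] /= len(hits)  # normalize by in-vocabulary token count
    return feats
```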