Microsummarization of Online Reviews: An Experimental Study

Authors: Rebecca Mason, Benjamin Gaska, Benjamin Van Durme, Pallavi Choudhury, Ted Hart, Bill Dolan, Kristina Toutanova, Margaret Mitchell

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states: "In an end-to-end evaluation, we find our best-performing system is vastly preferred by judges over a traditional extractive summarization approach."
Researcher Affiliation | Collaboration | Rebecca Mason (Google, Inc., Cambridge, Massachusetts, ramason@google.com); Benjamin Gaska (University of Arizona, Tucson, Arizona, bengaska@email.arizona.edu); Benjamin Van Durme (Johns Hopkins University, Baltimore, Maryland, vandurme@cs.jhu.edu); Pallavi Choudhury, Ted Hart, Bill Dolan, Kristina Toutanova, and Margaret Mitchell (Microsoft Research, Redmond, Washington, {pallavic,tedhar,billdol,kristout,memitc}@microsoft.com)
Pseudocode | No | The paper describes its algorithms and methods textually but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper does not provide an explicit statement or link indicating public release of its source code.
Open Datasets | No | The paper states that data was collected from Foursquare.com, but it does not provide concrete access information (link, DOI, repository, or citation) indicating that this collected dataset is publicly available.
Dataset Splits | No | The paper mentions training and testing data for sentiment analysis (9,617 training and 954 test examples), but it does not specify a validation split for this or for the main experimental setup, so a full train/validation/test split needed for reproducibility is not provided.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions software components and techniques (e.g., a neural-network-based sentiment model, word2vec), but it does not provide version numbers for any software dependencies.
Experiment Setup | No | The paper describes some aspects of the experimental setup, such as the optimal word2vec settings (40 dimensions, CBOW, window size 1), but it lacks comprehensive details on hyperparameters (e.g., learning rate, batch size) and other training settings for the primary models (the stated word2vec settings are sketched below).
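As an illustration only: the last row above quotes the only concrete embedding settings the paper reports (40-dimensional vectors, CBOW, window size 1). The minimal sketch below shows how those three settings map onto the gensim 4.x Word2Vec API; the toy corpus and every remaining parameter (min_count, workers, seed, epochs) are assumptions, since the paper does not specify them and its Foursquare data is not publicly available.

```python
# Minimal sketch, assuming gensim 4.x: only the three settings reported in the
# paper (40 dimensions, CBOW, window size 1) are taken from the source; the
# corpus and all other parameters are placeholder assumptions.
from gensim.models import Word2Vec

# Hypothetical tokenized review snippets standing in for the (non-public) Foursquare data.
sentences = [
    ["great", "coffee", "friendly", "staff"],
    ["service", "was", "slow", "but", "food", "was", "good"],
]

model = Word2Vec(
    sentences,
    vector_size=40,  # 40-dimensional embeddings, as stated in the paper
    sg=0,            # sg=0 selects CBOW, as stated in the paper
    window=1,        # context window of 1, as stated in the paper
    min_count=1,     # assumption: keep all tokens in this toy corpus
    workers=1,       # assumption
    seed=42,         # assumption: fixed seed for repeatability
)

print(model.wv["coffee"].shape)  # -> (40,)
```

This sketch does not reproduce the paper's pipeline; it only makes explicit which word2vec hyperparameters the paper does and does not report.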