Extractive and Abstractive Event Summarization over Streaming Web Text

Authors: Chris Kedzie, Kathleen McKeown

IJCAI 2016

Reproducibility variables, with the assessed result and the LLM's response for each:

Research Type: Experimental
LLM Response: We ran experiments in a crisis informatics type scenario... Our results show an improvement of at least 28.3% over the APSALIENCE and other baseline models in summary F1 performance and 43.8% when accounting for the timeliness of the summary content.
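As a point of reference for the percentages quoted above, the following minimal sketch shows how a relative improvement in summary F1 is computed; the precision and recall values are hypothetical placeholders, not numbers from the paper.

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def relative_improvement(system: float, baseline: float) -> float:
    """Relative gain of `system` over `baseline`, as a percentage."""
    return 100.0 * (system - baseline) / baseline

# Hypothetical scores for illustration only (not taken from the paper).
baseline_f1 = f1(precision=0.30, recall=0.24)
system_f1 = f1(precision=0.38, recall=0.31)

print(f"baseline F1: {baseline_f1:.3f}")
print(f"system F1:   {system_f1:.3f}")
print(f"relative improvement: {relative_improvement(system_f1, baseline_f1):.1f}%")
```
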
Researcher Affiliation: Academia
LLM Response: Chris Kedzie, Dept. of Computer Science, Columbia University, kedzie@cs.columbia.edu; Adviser: Kathleen McKeown, Dept. of Computer Science, Columbia University, kathy@cs.columbia.edu
Pseudocode: No
LLM Response: The paper describes the algorithms verbally but does not include any structured pseudocode or algorithm blocks.
Open Source Code: No
LLM Response: The paper does not state that the source code for the described methodology is publicly available, nor does it provide any links.
Open Datasets: No
LLM Response: The paper mentions using a 'multi-terabyte stream of online news articles' for experiments but does not provide any concrete access information (link, DOI, specific repository, or formal citation for a public dataset) for this data.
Dataset Splits: No
LLM Response: The paper describes processing data in hourly batches but does not specify explicit training, validation, or test splits (percentages, sample counts, or citations to predefined splits).
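To illustrate what hourly batching means for a streaming pipeline, here is a small sketch that groups a time-ordered document stream by hour; the data and grouping key are illustrative assumptions, not the authors' pipeline.

```python
from datetime import datetime, timezone
from itertools import groupby

# Hypothetical time-ordered stream of (timestamp, document) pairs.
stream = [
    (datetime(2014, 8, 1, 0, 12, tzinfo=timezone.utc), "initial report ..."),
    (datetime(2014, 8, 1, 0, 47, tzinfo=timezone.utc), "follow-up story ..."),
    (datetime(2014, 8, 1, 1, 5, tzinfo=timezone.utc), "later update ..."),
]

def hour_key(item):
    """Truncate a document's timestamp to the containing hour."""
    timestamp, _ = item
    return timestamp.replace(minute=0, second=0, microsecond=0)

# Each hourly batch would be one update step for the summarizer.
for hour, batch in groupby(stream, key=hour_key):
    documents = [doc for _, doc in batch]
    print(hour.isoformat(), f"{len(documents)} document(s)")
```
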
Hardware Specification: No
LLM Response: The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies: No
LLM Response: The paper does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) required for replication.
Experiment Setup: No
LLM Response: The paper describes the general process of the model (e.g., affinity propagation clustering biased by salience predictions, and the feature types used) but does not provide specific experimental setup details such as hyperparameter values (learning rate, batch size, epochs), optimizer settings, or other training configurations.
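To make "affinity propagation clustering biased by salience predictions" concrete, here is a minimal sketch, assuming scikit-learn's AffinityPropagation and using per-sentence salience scores as exemplar preferences so that salient sentences are more likely to be chosen as cluster centers. The inputs, scaling, and damping are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Hypothetical inputs: one feature vector and one predicted salience
# score per sentence (stand-ins for the paper's features and model).
sentence_vectors = rng.normal(size=(20, 50))
salience_scores = rng.uniform(size=20)

# Pairwise similarity between sentences.
similarity = cosine_similarity(sentence_vectors)

# Bias exemplar selection: map salience scores into the range between
# the minimum and median similarity, so high-salience sentences get
# higher preferences and are more likely to become exemplars.
low, high = similarity.min(), np.median(similarity)
preference = low + salience_scores * (high - low)

ap = AffinityPropagation(
    affinity="precomputed",
    preference=preference,
    damping=0.9,
    max_iter=1000,
    random_state=0,
)
labels = ap.fit_predict(similarity)
exemplars = ap.cluster_centers_indices_  # indices of exemplar sentences

print("number of clusters:", len(exemplars))
print("exemplar sentence indices:", exemplars)
```

The scaling range is one design choice among several: scikit-learn's default sets every preference to the median input similarity, so the mapping above simply spreads preferences below and up to that default according to predicted salience.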