Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Incremental Extractive Opinion Summarization Using Cover Trees
Authors: Somnath Basu Roy Chowdhury, Nicholas Monath, Kumar Avinava Dubey, Manzil Zaheer, Andrew McCallum, Amr Ahmed, Snigdha Chaturvedi
TMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, on a diverse collection of data (both real and synthetically created to illustrate scaling considerations), we demonstrate that CoverSumm is up to 36x faster than baseline methods, and capable of adapting to nuanced changes in data distribution. We also conduct human evaluations of the generated summaries and find that CoverSumm is capable of producing informative summaries consistent with the underlying review set. |
| Researcher Affiliation | Collaboration | Somnath Basu Roy Chowdhury1, Nicholas Monath2, Kumar Avinava Dubey3, Manzil Zaheer2, Andrew McCallum2, Amr Ahmed3, Snigdha Chaturvedi1 — 1UNC Chapel Hill, 2Google DeepMind, 3Google Research |
| Pseudocode | Yes | Algorithm 1: CoverSumm Algorithm; Algorithm 2: Cover Tree Reservoir Search; Algorithm 3: Synthetic LDA data generation; Algorithm 4: CoverSumm Deletion Routine |
| Open Source Code | Yes | Code available here: https://github.com/brcsomnath/CoverSumm |
| Open Datasets | Yes | We also perform experiments on two real-world datasets: (a) the Space dataset (Angelidis et al., 2021), which has 3K hotel reviews per entity and a total of 50K review sentences, and (b) Amazon US reviews (He & McAuley, 2016), which contain product reviews along with their temporal order. |
| Dataset Splits | No | We perform a grid search on a held-out development set for each dataset. |
| Hardware Specification | Yes | The experiments were run on a single Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz processor. |
| Software Dependencies | No | We implemented all our experiments in Python 3.6 on a Linux server. In our experiments, we use the SGTree implementation available in the graphgrove library. |
| Experiment Setup | Yes | The summary budget in all experiments was k = 20. In our experiments, we set δ = 1/t and perform a grid search on a small held-out set to determine α, for each dataset. We used the default set of hyperparameters available with HNSW and FAISS. |