Multi-Document Summarization Based on Two-Level Sparse Representation Model

Authors: He Liu, Hongliang Yu, Zhi-Hong Deng

AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on summarization benchmark data sets DUC2006 and DUC2007 show that our proposed model is effective and outperforms the state-of-the-art algorithms."
Researcher Affiliation | Academia | He Liu, Hongliang Yu, Zhi-Hong Deng; Key Laboratory of Machine Perception (Ministry of Education), School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China. Contact: lhdgriver@gmail.com, yuhongliang324@gmail.com, zhdeng@cis.pku.edu.cn
Pseudocode | Yes | Algorithm 1: MDS-Sparse Algorithm; Algorithm 2: Sparse Coding(S, S). A hedged sketch of such a sparse-coding step appears after this table.
Open Source Code | No | The paper gives URLs for third-party tools (splitta and the Porter stemmer) but does not state that the code for its own method is open source, nor does it provide a link to it.
Open Datasets | Yes | "In this study, we use the standard summarization benchmark DUC2006 and DUC2007 for evaluation."
Dataset Splits | No | The paper uses the DUC2006 and DUC2007 datasets but does not specify train/validation/test splits or mention a validation set.
Hardware Specification | Yes | "The experiments were performed on a 2.4GHz PC machine (Intel Core2 P8600) with 4GB of memory, running on an Ubuntu12.04 operating system."
Software Dependencies | No | The paper mentions 'splitta' and the 'Porter stemming algorithm' with URLs but does not provide version numbers for these or any other software dependencies.
Experiment Setup | No | The paper mentions general parameters, namely the sparse coefficient λ and the correlation coefficient β (set to 1000 through experiments), along with internal convergence criteria for the algorithm, but it does not provide detailed hyperparameters, optimizer settings, or a comprehensive experimental setup section.
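
The Pseudocode and Experiment Setup rows reference a Sparse Coding routine and a sparsity coefficient λ, but this page carries no further detail of either. Below is a minimal sketch of the generic sparse-coding step such a model plausibly relies on: reconstructing each sentence vector as a sparse linear combination of a candidate summary's sentence vectors via an ℓ1-penalized least-squares (lasso) objective, solved here with plain ISTA. The function name sparse_code, the ISTA solver, the toy data, and the default lam are all illustrative assumptions, not the authors' method (no source code is available, per the Open Source Code row); the correlation term weighted by β is omitted.

    import numpy as np

    def sparse_code(D, s, lam=0.1, n_iter=200):
        # Hypothetical sketch (not the authors' code): solve the lasso problem
        #   min_a 0.5 * ||s - D @ a||^2 + lam * ||a||_1
        # with ISTA, where the columns of D are candidate summary sentence
        # vectors and s is the sentence vector being reconstructed. lam plays
        # the role of the sparse coefficient λ named in the Experiment Setup
        # row; its actual value in the paper is not reported on this page.
        a = np.zeros(D.shape[1])
        step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
        for _ in range(n_iter):
            grad = D.T @ (D @ a - s)       # gradient of the squared-error term
            a = a - step * grad            # gradient descent step
            a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft-threshold
        return a

    # Toy usage: score a hypothetical candidate summary by how well it
    # reconstructs every sentence in the document set (lower is better).
    rng = np.random.default_rng(0)
    X = rng.random((50, 30))     # 30 sentence vectors in a 50-term space (synthetic)
    D = X[:, [0, 5, 9]]          # three arbitrarily chosen "summary" sentences
    error = sum(np.linalg.norm(X[:, i] - D @ sparse_code(D, X[:, i])) ** 2
                for i in range(X.shape[1]))
    print(f"total reconstruction error: {error:.3f}")

Under this reading, a summarizer would compare candidate summaries by their total reconstruction error and prefer the one that best reconstructs the whole document set, which is consistent with the sparse-representation framing in the paper's title, though the exact objective and selection procedure are not recoverable from this page.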