Compressive Document Summarization via Sparse Optimization
Authors: Jin-ge Yao, Xiaojun Wan, Jianguo Xiao
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Performance on DUC 2006 and DUC 2007 datasets shows that our compressive summarization results are competitive against the state-of-the-art results while maintaining reasonable readability. |
| Researcher Affiliation | Academia | Jin-ge Yao, Xiaojun Wan, Jianguo Xiao Institute of Computer Science and Technology, Peking University, Beijing 100871, China Key Laboratory of Computational Linguistics (Peking University), MOE, China {yaojinge, wanxiaojun, xiaojianguo}@pku.edu.cn |
| Pseudocode | Yes | Algorithm 1: "An ADMM solver for Problem (1)"; Algorithm 2: "A recursive procedure for sentence compression" (see the ADMM sketch after this table). |
| Open Source Code | No | The paper references a third-party tool (the MATE tool) with a URL, but provides no link to, or explicit statement about, the availability of the authors' own implementation of the proposed methods. |
| Open Datasets | Yes | To form direct comparisons with original formulations of data reconstruction based summarization [He et al., 2012], we run our experiments on exactly the same DUC 2006 and DUC 2007 datasets. |
| Dataset Splits | No | All other parameters involved in the optimization problems are tuned on a fraction of the earlier DUC 2005 dataset for convenience. The paper does not provide specific details on the dataset splits (e.g., percentages or counts) for training, validation, or testing. |
| Hardware Specification | No | In the experiments, the time consumption of our methods is significantly less than the original reconstruction formulation with gradient descent algorithm. Even for the compressive case, the acceleration ratio achieves more than 60 under the same single machine computing environment. The paper mentions only a "single machine computing environment" and gives no specific hardware details. |
| Software Dependencies | No | "We run the commonly used ROUGE (Recall-Oriented Understudy for Gisting Evaluation) metrics for summarization tasks [Lin and Hovy, 2003; Lin, 2004]" and "The dependency relations needed by the compressive summarization part are generated using the MATE tool, a fast and accurate dependency parser with the state-of-the-art performance [Bohnet, 2010]." No version numbers are provided for the ROUGE toolkit or the MATE tool (a minimal ROUGE-N sketch also follows the table). |
| Experiment Setup | Yes | The ε for sentence compression is set to 0.01·R_i for sentence i. All other parameters involved in the optimization problems are tuned on a fraction of the earlier DUC 2005 dataset for convenience. |
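
The paper's Algorithm 1 applies ADMM to its sparse-optimization objective (Problem (1)), which is not reproduced in full here. As a hedged illustration of the same solver pattern only, the sketch below runs ADMM on the standard lasso problem min_x ½‖Ax − b‖² + λ‖x‖₁; the function name `admm_lasso` and all parameter choices are our own assumptions, not the authors', and the paper's actual updates would substitute its own reconstruction term and sparsity penalty.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200, tol=1e-6):
    """Illustrative ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    NOTE: a generic lasso sketch, not the paper's Problem (1). ADMM
    splits the objective as f(x) + g(z) subject to x = z, alternating
    a ridge-style x-update, a shrinkage z-update, and a dual ascent
    on the scaled multiplier u.
    """
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    # Factor (A^T A + rho*I) once; every x-update solves the same system.
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(n_iter):
        # x-update: ridge regression with offset (z - u).
        y = np.linalg.solve(L, Atb + rho * (z - u))
        x = np.linalg.solve(L.T, y)
        # z-update: proximal step enforcing sparsity.
        z_old = z
        z = soft_threshold(x + u, lam / rho)
        # u-update: scaled dual variable accumulates the residual.
        u = u + x - z
        if np.linalg.norm(z - z_old) < tol:
            break
    return z

# Toy usage: recover a sparse coefficient vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=0.5), 2))
```

Caching the Cholesky factorization is a standard ADMM economy: the x-update solves the same ridge system at every iteration, so factoring once amortizes the cost, which is consistent with the large speedups the paper reports over gradient descent.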
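Since no ROUGE version is given, the following is a minimal sketch of the clipped n-gram recall at the core of ROUGE-N [Lin, 2004]; the function name `rouge_n_recall` is our own, and this is not the official ROUGE Perl toolkit, which additionally supports stemming, stopword removal, multiple references, and jackknifing.

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """Clipped n-gram recall, the core quantity behind ROUGE-N.

    Counts how many reference n-grams also appear in the candidate
    (each clipped by its candidate frequency), divided by the total
    number of reference n-grams.
    """
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not ref:
        return 0.0
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / sum(ref.values())

# Toy usage: 5 of the 6 reference unigrams are matched.
print(rouge_n_recall("the cat sat on the mat",
                     "the cat lay on the mat"))  # 5/6 ≈ 0.833
```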