Reader-Aware Multi-Document Summarization via Sparse Coding

Authors: Piji Li, Lidong Bing, Wai Lam, Hang Li, Yi Liao

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on this data set and some classical data sets demonstrate the effectiveness of our proposed approach." (Abstract; Section 3: Experiments)
Researcher Affiliation | Collaboration | Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA; Noah's Ark Lab, Huawei Technologies, Hong Kong ({pjli, wlam, yliao}@se.cuhk.edu.hk, lbing@cs.cmu.edu, hangli.hl@huawei.com)
Pseudocode | Yes | "Algorithm 1: Coordinate descent algorithm for sentence expressiveness detection" (see the solver sketch after this table)
Open Source Code | No | The paper does not provide any link or statement indicating that the source code for its methodology is publicly available.
Open Datasets | Yes | "In this work, we also generate a data set for conducting RA-MDS. Extensive experiments on our data set and some benchmark data sets have been conducted to examine the efficacy of our framework." (Section 1); "DUC. In order to show that our sparse coding based framework can also work well on traditional MDS task, we employ the benchmark data sets DUC 2006 and DUC 2007 for evaluation." (Section 3.1)
Dataset Splits | No | "We also have a separate development (tuning) set containing 24 topics and each topic has one model summary." (Section 3.1) A development set is mentioned, but specific train/validation/test percentages or sample counts are not provided for the new dataset or the DUC datasets.
Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU or GPU models, memory, or cloud resources) used for running the experiments.
Software Dependencies | Yes | "In the implementation, we use a package called lp_solve." (Footnote 2: http://lpsolve.sourceforge.net/5.5/) (see the LP-format sketch after this table)
Experiment Setup | Yes | "Parameter settings. We set C = 0.8 and p = 4 in the position weight function. For the sparse coding model, we set the stopping criteria T = 300, ε = 10^-4, and the learning rate η = 1. For the sparsity item penalty, we set λ = 0.005." (see the configuration sketch after this table)
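
The Pseudocode row only names Algorithm 1, so for concreteness here is a minimal Python sketch of a coordinate-descent solver for sparse-coding-based expressiveness weights. The objective assumed below (reconstructing each sentence vector from one shared weighted combination of sentence vectors, with weights kept in [0, 1]) and the function name are illustrative assumptions, not a verified transcription of the paper's algorithm; only the hyperparameter defaults come from the reported settings.

```python
import numpy as np


def expressiveness_coordinate_descent(X, lam=0.005, eta=1.0, T=300, eps=1e-4):
    """Hedged sketch in the spirit of the paper's Algorithm 1.

    X   : (n, d) array, one term vector per candidate sentence (assumption).
    lam : sparsity penalty        (paper: lambda = 0.005).
    eta : learning rate           (paper: eta = 1).
    T   : max iterations          (paper: T = 300).
    eps : stopping tolerance      (paper: epsilon = 10^-4).

    Assumed objective: 0.5 * sum_i ||x_i - sum_j a_j x_j||^2 + lam * sum_j a_j,
    with expressiveness weights a_j constrained to [0, 1].
    """
    n, _ = X.shape
    a = np.zeros(n)

    def objective(a):
        recon = a @ X                   # shared reconstruction sum_j a_j x_j
        return 0.5 * np.sum((X - recon) ** 2) + lam * np.sum(a)

    prev = objective(a)
    for _ in range(T):
        for j in range(n):
            recon = a @ X
            # Partial derivative of the objective w.r.t. a_j.
            grad_j = -np.sum((X - recon) @ X[j]) + lam
            # Coordinate gradient step, clipped to [0, 1] (assumption).
            a[j] = np.clip(a[j] - eta * grad_j, 0.0, 1.0)
        cur = objective(a)
        if abs(prev - cur) < eps:       # stop when the objective stabilizes
            break
        prev = cur
    return a


# Toy usage: 5 "sentences" with 10 term features each.
weights = expressiveness_coordinate_descent(np.random.rand(5, 10))
```

The LLM response for Software Dependencies states only that lp_solve is used, not which optimization it solves; a common role for an LP/ILP solver in multi-document summarization is sentence selection under a summary length budget. The snippet below writes a hypothetical toy model in lp_solve's LP file format; the variable names, salience scores, and length budget are invented for illustration.

```python
# A toy sentence-selection ILP in lp_solve's LP file format
# (http://lpsolve.sourceforge.net/5.5/). All numbers are made up;
# the paper only states that lp_solve is used in the implementation.
MODEL_LP = """
/* maximize total salience of the selected sentences */
max: 0.9 s1 + 0.7 s2 + 0.4 s3;

/* summary length budget in words */
length: 21 s1 + 17 s2 + 25 s3 <= 40;

/* s_i = 1 iff sentence i enters the summary */
bin s1, s2, s3;
"""

with open("model.lp", "w") as f:
    f.write(MODEL_LP)
# The file can then be solved with the lp_solve command-line tool or its API.
```

Finally, the reported parameter settings can be collected in a single configuration object for reference. The field names are illustrative (the paper gives only the symbols C, p, T, ε, η, and λ), and the exact form of the position weight function is not quoted in this report, so it is omitted.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RAMDSConfig:
    """Hyperparameters exactly as reported in 'Parameter settings'."""
    C: float = 0.8      # position weight function parameter
    p: int = 4          # position weight function parameter
    T: int = 300        # sparse coding: maximum number of iterations
    eps: float = 1e-4   # sparse coding: convergence tolerance (epsilon)
    eta: float = 1.0    # sparse coding: learning rate (eta)
    lam: float = 0.005  # sparsity item penalty (lambda)


CONFIG = RAMDSConfig()
```

The T, eps, eta, and lam defaults plug directly into the solver sketch above.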
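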
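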