Abstractive Text Summarization by Incorporating Reader Comments

Authors: Shen Gao, Xiuying Chen, Piji Li, Zhaochun Ren, Lidong Bing, Dongyan Zhao, Rui Yan

AAAI 2019, pp. 6399-6406

Reproducibility assessment (each variable is listed with its result and the supporting LLM response):
Research Type: Experimental. "Extensive experiments are conducted on our large-scale real-world text summarization dataset, and the results show that RASG achieves the state-of-the-art performance in terms of both automatic metrics and human evaluations."
Researcher Affiliation: Collaboration. (1) Institute of Computer Science and Technology, Peking University, Beijing, China; (2) Center for Data Science, Peking University, Beijing, China; (3) Tencent AI Lab, Shenzhen, China; (4) JD.com, Beijing, China; (5) R&D Center Singapore, Machine Intelligence Technology, Alibaba DAMO Academy.
Pseudocode: No. The paper describes its model and methods using prose and mathematical equations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code: No. The paper states, "We release our large-scale dataset for further research," with footnote 1 providing the URL http://t.cn/EAH5Jx. However, it does not explicitly state that the source code for the methodology is released, nor does it provide a link to the code.
Open Datasets: Yes. "We release a large scale abstractive text summarization dataset associated with reader comments."
Dataset Splits: No. The paper states, "In total, our training dataset contains 863826 training samples," but does not explicitly describe how the data are divided into training, validation, and test sets for reproducing the experiments (a hypothetical split procedure is sketched below).
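Since no official split is published, a reproduction has to define its own partition. The following is a minimal Python sketch of one way to carve held-out validation and test sets from the released training corpus; the 98/1/1 ratios and the fixed seed are illustrative assumptions, not choices made by the authors.

```python
import random

def make_splits(samples, val_frac=0.01, test_frac=0.01, seed=42):
    """Deterministically shuffle, then partition into train/val/test.

    The ratios and seed are illustrative assumptions; the RASG paper
    does not report how its 863,826 training samples relate to the
    validation and test data used in its experiments.
    """
    rng = random.Random(seed)
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    n_val = int(len(samples) * val_frac)
    n_test = int(len(samples) * test_frac)
    val = [samples[i] for i in indices[:n_val]]
    test = [samples[i] for i in indices[n_val:n_val + n_test]]
    train = [samples[i] for i in indices[n_val + n_test:]]
    return train, val, test
```

Fixing the seed makes the partition repeatable, which is the minimum a reproduction would need to report alongside its results.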
Hardware Specification: Yes. "We implement our experiments in TensorFlow (Abadi et al. 2016) on an NVIDIA P40 GPU."
Software Dependencies: No. The paper mentions TensorFlow (Abadi et al. 2016) but does not provide a specific version number for TensorFlow or for any other key software dependency required for replication.
Experiment Setup: Yes. The paper reports that the word embedding dimension is set to 256 and the number of hidden units to 512; that k = 5 in Equation 17 and ϕ = 0.5 in Equations 23 and 24; that the Adagrad optimizer (Duchi, Hazan, and Singer 2010) is used for training; and that beam search with beam size 5 is used to generate more fluent summary sentences.
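For convenience, the reported settings can be collected in one place. The sketch below is not the authors' code (none was released); the config keys, the `build_optimizer` helper, and in particular the learning rate are assumptions, since the paper reports the hyperparameters above but no learning rate.

```python
import tensorflow as tf

# Hyperparameters as reported in the paper; learning_rate is NOT
# reported and is a placeholder assumption.
CONFIG = {
    "embedding_dim": 256,  # word embedding dimension
    "hidden_units": 512,   # number of hidden units
    "k": 5,                # k in the paper's Equation 17
    "phi": 0.5,            # ϕ in the paper's Equations 23 and 24
    "beam_size": 5,        # beam width used at decoding time
    "learning_rate": 0.1,  # assumption: the paper does not report one
}

def build_optimizer(config):
    """Adagrad optimizer, as the paper specifies (Duchi, Hazan, and Singer 2010)."""
    return tf.keras.optimizers.Adagrad(learning_rate=config["learning_rate"])
```

Keeping the full setup in a single config object like this makes it straightforward to report every value alongside results, which is exactly the information this reproducibility check looks for.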