Learning towards Abstractive Timeline Summarization
Authors: Xiuying Chen, Zhangming Chan, Shen Gao, Meng-Hsuan Yu, Dongyan Zhao, Rui Yan
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on a large-scale real-world dataset, and the results show that MTS achieves state-of-the-art performance in terms of both automatic and human evaluations. |
| Researcher Affiliation | Academia | Center for Data Science, Peking University, Beijing, China; Institute of Computer Science and Technology, Peking University, Beijing, China |
| Pseudocode | No | The paper describes the model architecture and components in text and diagrams, but does not include structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper releases a timeline summarization dataset (http://tiny.cc/lfh56y) but does not provide a link to the model's source code. |
| Open Datasets | Yes | We also release the first real-world large-scale timeline summarization dataset (http://tiny.cc/lfh56y). |
| Dataset Splits | Yes | In total, our training dataset amounts to 169,423 samples with 5,000 evaluation and 5,000 test samples. |
| Hardware Specification | Yes | We implement our experiments in TensorFlow [Abadi et al., 2016] on NVIDIA GTX 1080 Ti GPU. |
| Software Dependencies | No | The paper states that the experiments are implemented in TensorFlow [Abadi et al., 2016] but does not list version numbers or any other software dependencies. |
| Experiment Setup | Yes | The word embedding dimension is set to 128 and the number of hidden units is 256. For time-event memory, the dimension of key, global value, and local value is 128, 512, and 256 respectively. We initialize all of the parameters randomly using a uniform distribution in [-0.02, 0.02]. The batch size is set to 16, and the event number is set to 8. We use the Adagrad optimizer [Duchi et al., 2010] as our optimizing algorithm and the learning rate is 0.15. In decoding, we employ beam search with beam size 4 to generate more fluent summary sentences. |
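
To make the reported experiment setup concrete, the following is a minimal Python sketch that collects the hyperparameters quoted in the table above into a single configuration object. The class name `MTSConfig` and all field names are illustrative assumptions; the authors did not release code, so this is not their implementation.

```python
# Hypothetical configuration sketch for the MTS experiment setup reported in the paper.
# Values are taken from the "Experiment Setup" row above; names are assumptions.
from dataclasses import dataclass


@dataclass
class MTSConfig:
    embedding_dim: int = 128            # word embedding dimension
    hidden_units: int = 256             # number of hidden units
    memory_key_dim: int = 128           # time-event memory: key dimension
    memory_global_value_dim: int = 512  # time-event memory: global value dimension
    memory_local_value_dim: int = 256   # time-event memory: local value dimension
    init_range: float = 0.02            # uniform parameter init in [-0.02, 0.02]
    batch_size: int = 16
    event_num: int = 8
    learning_rate: float = 0.15         # Adagrad optimizer [Duchi et al., 2010]
    beam_size: int = 4                  # beam search width used at decoding time


# Dataset sizes reported in the paper: 169,423 training, 5,000 evaluation, 5,000 test samples.
if __name__ == "__main__":
    config = MTSConfig()
    print(config)
```
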