Reward Learning for Efficient Reinforcement Learning in Extractive Document Summarisation

Authors: Yang Gao, Christian M. Meyer, Mohsen Mesgar, Iryna Gurevych

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we evaluate our approach on extractive multi-document summarisation. We show that RELIS reduces the training time by two orders of magnitude compared to the state-of-the-art models while performing on par with them.
Researcher Affiliation | Academia | Yang Gao (1), Christian Meyer (2), Mohsen Mesgar (2) and Iryna Gurevych (2); (1) Dept. of Computer Science, Royal Holloway, University of London; (2) Ubiquitous Knowledge Processing Lab (UKP-TUDA), Technische Universität Darmstadt
Pseudocode | No | The paper describes algorithms and methods but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Source code and supplementary material are available at https://github.com/UKPLab/ijcai2019-relis.
Open Datasets | Yes | We evaluate RELIS for extractive multi-document summarisation on three benchmark datasets from the Document Understanding Conferences (DUC, https://duc.nist.gov/) described in Table 1.
Dataset Splits | Yes | To decide the best parameters, we perform 10-fold cross-validation on DUC'01. In each run of the leave-one-out experiments, we randomly select 30% of the data from the training set as the dev set, and select the model with the best performance on the dev set.
Hardware Specification | Yes | We run RELIS, SRSum, Deep TD and REAPER on the same workstation with a 4-core CPU, 8 GB memory and no GPUs.
Software Dependencies | No | The paper mentions software like Adam, InferSent, and a DQN-based RL summariser, but it does not specify version numbers for these software components or any other libraries.
Experiment Setup | Yes | We use Adam with initial learning rate 10^-2. The number of epochs is 10 and batch size is 2.
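
The dev-set selection reported under Dataset Splits is straightforward to reproduce. Below is a minimal Python sketch assuming scikit-learn's train_test_split and a placeholder list of DUC topic clusters; none of the names are taken from the RELIS codebase, and only the 30% dev proportion comes from the paper.

    from sklearn.model_selection import train_test_split

    # Hypothetical list of DUC training clusters; real topic IDs differ.
    training_topics = ["d%03d" % i for i in range(30)]

    # In each leave-one-out run, hold out 30% of the training data as a dev set
    # and keep the model that performs best on it (as described in the paper).
    train_topics, dev_topics = train_test_split(
        training_topics, test_size=0.30, random_state=42, shuffle=True
    )
    print(len(train_topics), len(dev_topics))  # 21 and 9 topics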
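
The optimiser settings reported under Experiment Setup translate into a standard training loop. The PyTorch sketch below is illustrative only: the model, data and loss are placeholders, while the Adam optimiser with learning rate 10^-2, the 10 epochs and the batch size of 2 are the values quoted from the paper.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Placeholder data and model; the real RELIS reward and summariser models
    # are available in the authors' repository.
    features = torch.randn(20, 8)
    targets = torch.randn(20, 1)
    loader = DataLoader(TensorDataset(features, targets),
                        batch_size=2, shuffle=True)        # batch size 2 (from the paper)

    model = torch.nn.Linear(8, 1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)  # Adam, initial lr 10^-2 (from the paper)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(10):                                # 10 epochs (from the paper)
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()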