Text Summarization with Oracle Expectation

Authors: Yumo Xu, Mirella Lapata

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on summarization benchmarks show that OREO outperforms comparison labeling schemes in both supervised and zero-shot settings, including cross-domain and cross-lingual tasks."
Researcher Affiliation | Academia | "Yumo Xu & Mirella Lapata, Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB; yumo.xu@ed.ac.uk, mlap@inf.ed.ac.uk"
Pseudocode | Yes | "Algorithm 1: Labeling with Oracle Expectation"
Open Source Code | Yes | "Our code and models can be found at https://github.com/yumoxu/oreo."
Open Datasets | Yes | "We report experiments on a variety of summarization datasets including CNN/DM (Hermann et al., 2015), XSum (Narayan et al., 2018b), Multi-News (Fabbri et al., 2019), Reddit (Kim et al., 2019), and WikiHow (Koupaee & Wang, 2018). ... We used the datasets as preprocessed by Zhong et al. (2020), which can be accessed at: https://github.com/maszhongming/matchsum."
Dataset Splits | Yes | "Detailed statistics are shown in Table 2. ... #Train 287,084, #Validation 13,367, #Test 11,489"
Hardware Specification | Yes | "We used three GeForce RTX 2080 GPUs for model training and bert.base in our experiments."
Software Dependencies | No | The paper mentions using Python packages like 'pyrouge' and 'spacy' and tools like 'file2rouge', but it does not specify the version numbers for any of these software components.
Experiment Setup | Yes | "We set the batch size to 4, and accumulated gradients every 32 steps. Following Jia et al. (2022), we used a word replacement rate of 0.5 to learn cross-lingual representation alignment. We fine-tuned models on the English data with a learning rate of 2 × 10^-3 for 50,000 optimization steps, and a warm-up of 10,000 steps."
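The reported setup can be collected into a minimal configuration sketch. This is an illustration only: the key names and the effective-batch-size calculation are assumptions, not taken from the authors' released code.

```python
# Hedged sketch of the fine-tuning hyperparameters quoted above.
# All field names are illustrative; the official implementation is at
# https://github.com/yumoxu/oreo.
config = {
    "encoder": "bert.base",
    "batch_size": 4,                 # per-step batch size
    "gradient_accumulation_steps": 32,
    "word_replacement_rate": 0.5,    # cross-lingual representation alignment
    "learning_rate": 2e-3,
    "optimization_steps": 50_000,
    "warmup_steps": 10_000,
}

# With gradient accumulation, the effective batch size is the per-step
# batch size multiplied by the number of accumulation steps.
effective_batch_size = config["batch_size"] * config["gradient_accumulation_steps"]
print(effective_batch_size)  # 128
```

Gradient accumulation lets a small-memory GPU setup (here, three RTX 2080s) emulate a much larger batch by deferring the optimizer step.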