Text Revision By On-the-Fly Representation Optimization

Authors: Jingjing Li, Zichao Li, Tao Ge, Irwin King, Michael R. Lyu

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The empirical experiments on two typical and important text revision tasks, text formalization and text simplification, show the effectiveness of our approach.
Researcher Affiliation | Collaboration | Jingjing Li1, Zichao Li2, Tao Ge3, Irwin King1, Michael R. Lyu1 (1 The Chinese University of Hong Kong; 2 Mila / McGill University; 3 Microsoft Research Asia)
Pseudocode | Yes | Algorithm 1: Text revision with OREO
Open Source Code | Yes | Our code and model are released at https://github.com/jingjingli01/OREO.
Open Datasets | Yes | Based on the widely used Newsela corpus (Xu, Callison-Burch, and Napoles 2015), Jiang et al. (2020) constructed a reliable corpus consisting of 666K complex-simple sentence pairs (dataset available at https://github.com/chaojiang06/wiki-auto). ... We experimented with the domain of Family & Relationships in Grammarly's Yahoo Answers Formality Corpus (GYAFC-fr) (Rao and Tetreault 2018).
Dataset Splits | Yes | The final dataset consists of 269K train, 28K development and 29K test sentences. ... There are 100K, 5K and 2.5K informal-formal pairs in GYAFC.
Hardware Specification | Yes | It takes 8 GPU-hours to fine-tune RoBERTa on one Tesla V100 for both tasks.
Software Dependencies | No | The paper states, 'We implement RoBERTa based on Huggingface transformers (Wolf et al. 2020).' While it names a library and cites a paper for it, it does not provide the specific version numbers of the software dependencies needed for replication (a version-recording sketch follows the table).
Experiment Setup | Yes | We primarily adopted the default hyperparameters with a fixed learning rate of 5e-5. The numbers of fine-tuning epochs are 6 and 2 for text simplification and text formalization, respectively. ... The maximum iteration I was set to 4 ... λ was selected from {0.8, 1.2, 1.6, 2.0} and set to 1.6. ... The attribute threshold δ is task-dependent. It was selected from {0.1, 0.2, ..., 0.5} and set to 0.5 for text simplification and 0.3 for text formalization. K = 1 for both tasks.
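For concreteness, below is a minimal, generic sketch of how the quoted inference hyperparameters from the Experiment Setup row (maximum iterations I = 4, step size λ = 1.6, attribute threshold δ = 0.5 or 0.3) could drive a gradient-based representation-refinement loop. This is an assumption-laden illustration and NOT the paper's Algorithm 1, which is not reproduced here; attr_head, hidden, and the revise function are hypothetical stand-ins.

    import torch

    def revise(hidden: torch.Tensor, attr_head,
               max_iter: int = 4, lam: float = 1.6,
               delta: float = 0.5) -> torch.Tensor:
        # Hypothetical sketch: iteratively nudge token representations
        # toward the target attribute (I = max_iter, lambda = lam,
        # threshold = delta, per the values quoted above).
        h = hidden.detach().clone().requires_grad_(True)
        for _ in range(max_iter):
            scores = attr_head(h)          # per-token target-attribute probability (assumed interface)
            if scores.min().item() >= delta:
                break                      # every token already passes the threshold
            scores.sum().backward()        # gradient of attribute score w.r.t. h
            with torch.no_grad():
                h += lam * h.grad          # gradient-ascent step of size lambda
                h.grad.zero_()
        return h.detach()

The refined representations would then be decoded back into revised tokens; how decoding and K (set to 1 for both tasks) interact is specified only in the paper's Algorithm 1 and is not modeled here.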
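Regarding the missing dependency versions flagged in the Software Dependencies row, a minimal sketch for recording the exact environment before a replication attempt is shown below. Only transformers is named in the paper; torch is an assumed, unconfirmed dependency of the released code.

    import importlib.metadata as md

    # Print pinned-style version strings for the libraries of interest.
    for pkg in ("transformers", "torch"):
        try:
            print(f"{pkg}=={md.version(pkg)}")
        except md.PackageNotFoundError:
            print(f"{pkg}: not installed")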