Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Show Me How To Revise: Improving Lexically Constrained Sentence Generation with XLNet

Authors: Xingwei He, Victor O.K. Li (pp. 12989-12997)

AAAI 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results have demonstrated that our proposed model performs much better than the previous work in terms of sentence fluency and diversity." |
| Researcher Affiliation | Academia | "Xingwei He, Victor O.K. Li. Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China. EMAIL, EMAIL" |
| Pseudocode | Yes | "Algorithm 1 Constrained Sentence Generation with XLNet" |
| Open Source Code | Yes | "Our code, pre-trained models and Appendix are available at https://github.com/NLPCode/MCMCXLNet." |
| Open Datasets | Yes | "We used the One-Billion-Word corpus (http://www.statmt.org/lm-benchmark/) to construct the synthetic dataset." |
| Dataset Splits | Yes | "We selected 6M, 0.3M and 1K sentences from the One-Billion-Word corpus as the training, validation, and test sets, respectively." |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Hugging Face' and 'GPT-2 small (117M)' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | "The experiment setups for language models and the classifier are shown in the Appendix. We set N to 20. We set K to 50. All MCMC-based models were run for 200 steps. We ran 20 iterations for Bayesian MCMC, TIGS, L-MCMC, and L-MCMC-C." |
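For context on what "MCMC steps" and lexical constraints mean in this setting, here is a minimal, self-contained sketch of a Metropolis-Hastings loop for lexically constrained sentence generation. This is not the paper's Algorithm 1: the function names (`toy_score`, `mcmc_constrained`), the toy scoring function, and the insert/delete proposal scheme are all hypothetical stand-ins for the paper's XLNet-based proposals and language-model scoring.

```python
import math
import random

def toy_score(tokens):
    # Hypothetical stand-in for a language-model log-probability.
    # A real implementation would score the sentence with XLNet/GPT-2;
    # here we simply prefer shorter sentences so the chain converges.
    return -0.5 * len(tokens)

def mcmc_constrained(keywords, vocab, steps=200, seed=0):
    """Toy MCMC loop for lexically constrained generation (illustrative only).

    Starts from the constraint words and repeatedly proposes inserting or
    deleting a token, accepting each proposal with a Metropolis-Hastings
    test on the score. Constraint words are never deleted, which enforces
    the hard lexical constraint.
    """
    rng = random.Random(seed)
    sent = list(keywords)
    for _ in range(steps):
        proposal = list(sent)
        if rng.random() < 0.5 or len(proposal) == len(keywords):
            # Propose inserting a random vocabulary word at a random slot.
            proposal.insert(rng.randrange(len(proposal) + 1), rng.choice(vocab))
        else:
            # Propose deleting a random non-constraint token.
            deletable = [i for i, w in enumerate(proposal) if w not in keywords]
            proposal.pop(rng.choice(deletable))
        # Metropolis-Hastings acceptance on the (toy) log-score.
        if math.log(rng.random() + 1e-12) < toy_score(proposal) - toy_score(sent):
            sent = proposal
    return sent

sentence = mcmc_constrained(["cat", "mat"], ["the", "sat", "on", "a"], steps=50)
```

The quoted setup ("200 steps", "K to 50") corresponds to the length of such a chain and to the candidate-set size used when proposing tokens; in the real model, proposals are ranked by XLNet rather than drawn uniformly as above.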