Order-Planning Neural Text Generation From Structured Data

Authors: Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, Zhifang Sui

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We conducted experiments on the WIKIBIO dataset and achieved higher performance than previous methods in terms of BLEU, ROUGE, and NIST scores; we also performed ablation tests to analyze each component of our model. |
| Researcher Affiliation | Academia | Key Laboratory of Computational Linguistics, Ministry of Education; School of EECS, Peking University; David R. Cheriton School of Computer Science, University of Waterloo |
| Pseudocode | No | The paper contains architectural diagrams and mathematical equations but no structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://sites.google.com/site/orderplanningnlg/ |
| Open Datasets | Yes | We used the newly published WIKIBIO dataset (Lebret, Grangier, and Auli 2016), which contains 728,321 biographies from WikiProject Biography (originally from English Wikipedia, September 2015). Dataset: https://github.com/DavidGrangier/wikipedia-biography-dataset (see the loader sketch below) |
| Dataset Splits | Yes | We applied the standard data split: 80% for training and 10% for testing, except that model selection was performed on a validation subset of 1000 samples (based on BLEU-4). (see the split sketch below) |
| Hardware Specification | No | No specific hardware details (such as CPU/GPU models, memory, or computing environment) used for running the experiments were provided. |
| Software Dependencies | No | The paper mentions Adam as the optimization algorithm but does not specify any software libraries or their version numbers. |
| Experiment Setup | Yes | In our experiments, both word and table-field embeddings were 400-dimensional and LSTM layers were 500-dimensional. We used Adam (Kingma and Ba 2015) as the optimization algorithm with a batch size of 32; other hyperparameters were set to default values. (see the setup sketch below) |
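
The WIKIBIO repository linked above distributes each split as plain-text files. Below is a minimal loading sketch, assuming the repository's `train/train.box` layout of one infobox per line made of whitespace-separated `field_index:token` pairs; the file name and format are assumptions taken from the dataset repository, not details stated in the paper.

```python
from pathlib import Path

def load_infoboxes(path="wikipedia-biography-dataset/train/train.box"):
    """Yield one infobox per biography as a list of (field, token) pairs.

    Assumed file layout: one biography per line, whitespace-separated
    tokens of the form 'birth_date_1:1932'.
    """
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        pairs = (tok.split(":", 1) for tok in line.split() if ":" in tok)
        yield [(field, value) for field, value in pairs]
```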
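
The paper's "standard data split" refers to the split distributed with WIKIBIO. As an illustration only, here is a sketch of an 80/10/10 index split with the 1,000-sample validation subset used for model selection; the shuffling and seed are assumptions for the sketch, since the released dataset already fixes the split.

```python
import random

def split_indices(num_records=728_321, seed=42):
    """80/10/10 train/validation/test split over record indices, plus a
    1,000-sample validation subset for model selection (BLEU-4).

    Shuffling and the seed are illustrative assumptions; the released
    WIKIBIO data already comes pre-split.
    """
    idx = list(range(num_records))
    random.Random(seed).shuffle(idx)
    n_train, n_valid = int(0.8 * num_records), int(0.1 * num_records)
    train = idx[:n_train]
    valid = idx[n_train:n_train + n_valid]
    test = idx[n_train + n_valid:]
    model_selection = valid[:1000]  # subset used to pick the best model
    return train, valid, test, model_selection
```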
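
To make the reported hyperparameters concrete, here is a minimal PyTorch sketch of an encoder with 400-dimensional word and field embeddings, a 500-dimensional LSTM, and Adam with a batch size of 32. The vocabulary sizes and module layout are hypothetical; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

EMBED_DIM = 400    # word and table-field embedding size (from the paper)
HIDDEN_DIM = 500   # LSTM hidden size (from the paper)
BATCH_SIZE = 32    # Adam batch size (from the paper)

# Vocabulary sizes below are hypothetical placeholders.
WORD_VOCAB, FIELD_VOCAB = 20_000, 1_700

class TableEncoder(nn.Module):
    """Encode a sequence of (word, field) pairs from an infobox."""
    def __init__(self):
        super().__init__()
        self.word_emb = nn.Embedding(WORD_VOCAB, EMBED_DIM)
        self.field_emb = nn.Embedding(FIELD_VOCAB, EMBED_DIM)
        # Concatenated word+field embeddings feed the LSTM.
        self.lstm = nn.LSTM(2 * EMBED_DIM, HIDDEN_DIM, batch_first=True)

    def forward(self, words, fields):
        x = torch.cat([self.word_emb(words), self.field_emb(fields)], dim=-1)
        outputs, (hidden, cell) = self.lstm(x)
        return outputs, hidden

model = TableEncoder()
optimizer = torch.optim.Adam(model.parameters())  # defaults, per the paper
```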