Sentence Generation for Entity Description with Content-Plan Attention

Authors: Bayu Trisedya, Jianzhong Qi, Rui Zhang

AAAI 2020, pp. 9057-9064

Reproducibility assessment: each variable below is listed with its result, followed by the LLM response (a supporting excerpt or justification from the paper).
Research Type: Experimental. "Experimental results show that our model outperforms state-of-the-art baselines by up to 3% and 5% in terms of BLEU score on two real-world datasets, respectively."
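
The reported gains are measured in BLEU. For reference, here is a minimal sketch of corpus-level BLEU scoring with NLTK; the paper does not name its scoring script, so the example sentences, tokenization, and smoothing choices below are assumptions.

```python
# Minimal corpus-BLEU sketch using NLTK. The paper does not state its
# evaluation toolkit; tokenization and smoothing here are assumptions.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Hypothetical generated descriptions and their reference descriptions.
hypotheses = [
    "john doe is an american actor .".split(),
    "jane roe was a british chemist .".split(),
]
references = [
    ["john doe is an american film actor .".split()],
    ["jane roe was a british chemist .".split()],
]

smooth = SmoothingFunction().method1  # avoid zero scores on short sentences
score = corpus_bleu(references, hypotheses, smoothing_function=smooth)
print(f"BLEU: {100 * score:.2f}")
```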
Researcher Affiliation: Academia. School of Computing and Information Systems, The University of Melbourne. Contact: btrisedya@student.unimelb.edu.au, {jianzhong.qi, rui.zhang}@unimelb.edu.au
Pseudocode: No. The paper describes its methods using mathematical equations and prose but does not include structured pseudocode or algorithm blocks.
Open Source Code: Yes. "Code and dataset: http://www.ruizhang.info/GKB/gkb.htm"
Open Datasets: Yes. "We evaluate our model on two real-world datasets, including WIKIALL and WIKIBIO datasets. [...] The collected dataset contains 152,231 triples of attributes, content-plan, and description (we call it the WIKIALL dataset). [...] For benchmarking, we also use the WIKIBIO dataset (Lebret, Grangier, and Auli 2016) which contains 728,321 biographies from Wikipedia. [...] Code and dataset: http://www.ruizhang.info/GKB/gkb.htm"
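
WIKIBIO is also widely mirrored. A minimal sketch of pulling it through the Hugging Face datasets library follows; the "wiki_bio" hub identifier and the field names are assumptions about the mirror, not the authors' own distribution channel (which is the URL above).

```python
# Sketch: load the WIKIBIO benchmark from the Hugging Face hub.
# The "wiki_bio" identifier is an assumption about a community mirror;
# the authors distribute data from http://www.ruizhang.info/GKB/gkb.htm.
from datasets import load_dataset

wikibio = load_dataset("wiki_bio")     # train/validation/test splits
print(wikibio["train"].num_rows)       # roughly 582k biographies in train

example = wikibio["train"][0]
print(example["input_text"])           # infobox attributes (table + context)
print(example["target_text"])          # first sentence of the biography
```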
Dataset Splits: Yes. "We split each dataset into train set (80%), dev set (10%) and test set (10%)."
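
The paper gives only the ratios. A minimal sketch of reproducing such a split is shown below; the shuffle order and seed are assumptions, since the paper does not state them.

```python
# Sketch: 80/10/10 train/dev/test split by shuffling examples.
# The seed and shuffle order are assumptions; the paper states only ratios.
import random

def split_dataset(examples, seed=0):
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    n = len(examples)
    n_train, n_dev = int(0.8 * n), int(0.1 * n)
    train = examples[:n_train]
    dev = examples[n_train:n_train + n_dev]
    test = examples[n_train + n_dev:]
    return train, dev, test

# Example with the WIKIALL size reported in the paper (152,231 triples).
train, dev, test = split_dataset(range(152_231))
print(len(train), len(dev), len(test))  # 121784 15223 15224
```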
Hardware Specification: No. No specific hardware details (e.g., GPU/CPU models or memory amounts) used for running the experiments are provided in the paper.
Software Dependencies: No. No specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments are provided.
Experiment Setup: No. The paper describes the model and training objectives but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings.