Topic-to-Essay Generation with Neural Networks
Authors: Xiaocheng Feng, Ming Liu, Jiahao Liu, Bing Qin, Yibo Sun, Ting Liu
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show that our approach obtains a much better BLEU-2 score compared to various baselines. (A BLEU-2 scoring sketch follows the table.) |
| Researcher Affiliation | Academia | Xiaocheng Feng, Ming Liu, Jiahao Liu, Bing Qin, Yibo Sun, Ting Liu (Harbin Institute of Technology, China) |
| Pseudocode | No | The paper describes its models and approaches using mathematical equations and figures, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and data are available at: https://github.com/hit-computer/MTA-LSTM. |
| Open Datasets | Yes | Our code and data are available at: https://github.com/hit-computer/MTA-LSTM. In the end, we obtain 305,000 paragraph-level essays and randomly select 300,000 as training set and 5,000 as test set. We name this dataset as ESSAY. |
| Dataset Splits | Yes | In the end, we obtain 305,000 paragraph-level essays and randomly select 300,000 as training set and 5,000 as test set. We name this dataset as ESSAY. For ZhiHu, we select 50,000 articles as training data and 5,000 articles as test data. (A split sketch follows the table.) |
| Hardware Specification | No | The paper describes model architecture parameters (e.g., '800 hidden units') and training algorithms, but it does not specify any hardware details such as GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper mentions using Word2vec [Mikolov et al., 2013], the Language Technology Platform [Che et al., 2010], and the AdaDelta algorithm [Zeiler, 2012], but it does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The word embedding dimensionality is 300 and initialized by Word2vec [Mikolov et al., 2013]. ... The recurrent hidden layers of the two-layer LSTM model contain 800 hidden units. Parameters of our model were randomly initialized over a uniform distribution with support [-0.04, 0.04]. The model was trained with the AdaDelta algorithm [Zeiler, 2012], where the minibatch was set to be 32. Specifically, in the testing phase, we use beam search (beam=2) to generate diverse text. (A hedged configuration sketch follows the table.) |
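The Research Type row cites the paper's BLEU-2 claim, but the paper does not say how BLEU-2 was implemented. The following is a minimal sketch using NLTK's `corpus_bleu` with uniform unigram/bigram weights; the smoothing choice and the toy tokenized examples are assumptions, not the authors' pipeline.

```python
# Minimal BLEU-2 sketch, assuming NLTK as the scoring tool (not confirmed
# by the paper). BLEU-2 = BLEU with uniform weights over 1- and 2-grams.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu2(list_of_references, hypotheses):
    """list_of_references: one list of reference token lists per hypothesis;
    hypotheses: list of hypothesis token lists."""
    smooth = SmoothingFunction().method1  # avoids zero scores on short texts
    return corpus_bleu(
        list_of_references,
        hypotheses,
        weights=(0.5, 0.5),  # uniform 1-gram/2-gram weights
        smoothing_function=smooth,
    )

# Illustrative (made-up) tokenized Chinese essay fragments:
refs = [[["这", "是", "一", "篇", "散文"]]]
hyps = [["这", "是", "散文"]]
print(bleu2(refs, hyps))
```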
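For the Dataset Splits row, below is a minimal sketch of the reported random selection (300,000 train / 5,000 test for ESSAY; 50,000 / 5,000 for ZhiHu), assuming a plain shuffle-and-slice split; the fixed seed and the commented-out loader are hypothetical.

```python
# Shuffle-and-slice split sketch; the paper says only "randomly select",
# so the fixed seed here is an assumption made for reproducibility.
import random

def random_split(examples, n_train, n_test, seed=42):
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    assert len(examples) >= n_train + n_test
    return examples[:n_train], examples[n_train:n_train + n_test]

# ESSAY: 300,000 train / 5,000 test out of 305,000 paragraph-level essays.
# essays = load_essays(...)  # hypothetical loader
# train, test = random_split(essays, 300_000, 5_000)
# ZhiHu: 50,000 train / 5,000 test.
# zh_train, zh_test = random_split(zhihu_articles, 50_000, 5_000)
```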
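For the Experiment Setup row, here is a hedged PyTorch sketch of the reported configuration: 300-d Word2vec-initialized embeddings, a two-layer LSTM with 800 hidden units, uniform initialization over [-0.04, 0.04], AdaDelta, and minibatch size 32. The released MTA-LSTM code may use a different framework; the vocabulary size and class name here are placeholders.

```python
# Sketch of the reported setup, assuming PyTorch; not the authors' code.
import torch
import torch.nn as nn

VOCAB_SIZE = 50_000  # placeholder; the excerpt does not give a vocabulary size

class TwoLayerLSTMGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, 300)  # 300-d word embeddings
        self.lstm = nn.LSTM(300, 800, num_layers=2,  # two layers, 800 units
                            batch_first=True)
        self.out = nn.Linear(800, VOCAB_SIZE)

    def forward(self, token_ids, state=None):
        h, state = self.lstm(self.embed(token_ids), state)
        return self.out(h), state

model = TwoLayerLSTMGenerator()
for p in model.parameters():  # uniform init over [-0.04, 0.04], as reported
    nn.init.uniform_(p, -0.04, 0.04)
# model.embed.weight.data.copy_(word2vec_matrix)  # Word2vec init; loading not shown
optimizer = torch.optim.Adadelta(model.parameters())
BATCH_SIZE = 32  # minibatch size reported in the paper
# Test-time decoding reportedly used beam search with beam width 2 (not shown).
```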