Table-to-Text Generation by Structure-Aware Seq2seq Learning
Authors: Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, Zhifang Sui
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on the WIKIBIO dataset which contains over 700k biographies and corresponding infoboxes from Wikipedia. ... Automatic evaluations also show our model outperforms the baselines by a great margin. |
| Researcher Affiliation | Academia | Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, Zhifang Sui Key Laboratory of Computational Linguistics, Ministry of Education, School of Electronics Engineering and Computer Science, Peking University, Beijing, China {tianyu0421, wkx, shalei, chbb, szf}@pku.edu.cn |
| Pseudocode | No | The paper includes mathematical equations and architectural diagrams but no structured pseudocode or algorithm blocks labeled as such. |
| Open Source Code | Yes | Code for this work is available on https://github.com/tyliupku/wiki2bio. |
| Open Datasets | Yes | We use the WIKIBIO dataset proposed by Lebret, Grangier, and Auli (2016) as the benchmark dataset. WIKIBIO contains 728,321 articles from English Wikipedia (Sep 2015). |
| Dataset Splits | Yes | The corpus has been divided into training (80%), testing (10%) and validation (10%) sets. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper discusses model parameters and experimental setup (e.g., word dimension, hidden size, batch size, optimizer) but does not provide specific hardware details like GPU/CPU models or other computing infrastructure used for experiments. |
| Software Dependencies | Yes | We use the KenLM toolkit to train 5-gram models without pruning. (See the KenLM sketch after the table.) |
| Experiment Setup | Yes | The detail of model parameters is listed in Table 2: Word dimension: 400; Field dimension: 50; Position dimension: 5; Hidden size: 500; Batch size: 32; Learning rate: 0.0005; Optimizer: Adam. (See the configuration sketch after the table.) |
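
To make the 80/10/10 split reported above concrete, here is a minimal sketch in Python. The file names, the one-example-per-line serialization, and the fixed random seed are assumptions for illustration, not details from the paper.

```python
import random

# Hypothetical input: one serialized (infobox, biography) pair per line.
with open("wikibio_pairs.txt", encoding="utf-8") as f:
    examples = f.readlines()

random.seed(42)          # assumed fixed seed, for a reproducible shuffle
random.shuffle(examples)

n = len(examples)
n_train = int(0.8 * n)   # 80% training
n_test = int(0.1 * n)    # 10% testing

splits = {
    "train.txt": examples[:n_train],
    "test.txt": examples[n_train:n_train + n_test],
    "valid.txt": examples[n_train + n_test:],  # remaining ~10% validation
}

for name, lines in splits.items():
    with open(name, "w", encoding="utf-8") as f:
        f.writelines(lines)
```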
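For the 5-gram language model named under Software Dependencies, the paper only mentions the KenLM toolkit. The sketch below assumes KenLM's `lmplz` binary is built and on the PATH, and that the training sentences live in a hypothetical `train_sentences.txt`.

```python
import subprocess

# Train an order-5 model; lmplz applies no pruning unless --prune is passed,
# which matches the paper's "without pruning" setting.
with open("train_sentences.txt", "rb") as src, open("lm5.arpa", "wb") as dst:
    subprocess.run(["lmplz", "-o", "5"], stdin=src, stdout=dst, check=True)
```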
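The hyperparameters listed under Experiment Setup map naturally onto a structure-aware encoder configuration. The sketch below is only an illustration of those Table 2 sizes in PyTorch, not the authors' released TensorFlow implementation; the vocabulary sizes, the use of forward/backward position embeddings, and the LSTM wiring are assumptions.

```python
import torch
import torch.nn as nn

# Placeholder vocabulary sizes; the real ones come from the WIKIBIO corpus.
WORD_VOCAB, FIELD_VOCAB, MAX_POS = 20000, 1500, 30

class InfoboxEncoder(nn.Module):
    """Encodes each infobox token as word + field + position embeddings,
    using the dimensions reported in Table 2."""
    def __init__(self):
        super().__init__()
        self.word_emb = nn.Embedding(WORD_VOCAB, 400)    # word dimension 400
        self.field_emb = nn.Embedding(FIELD_VOCAB, 50)   # field dimension 50
        self.pos_emb = nn.Embedding(MAX_POS, 5)          # position dimension 5
        # 400 + 50 + 2*5 = 460-dimensional input, 500-dimensional hidden state
        self.rnn = nn.LSTM(400 + 50 + 2 * 5, 500, batch_first=True)

    def forward(self, words, fields, pos_fwd, pos_bwd):
        x = torch.cat([self.word_emb(words), self.field_emb(fields),
                       self.pos_emb(pos_fwd), self.pos_emb(pos_bwd)], dim=-1)
        return self.rnn(x)

encoder = InfoboxEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.0005)  # Adam, lr 0.0005
BATCH_SIZE = 32                                                # batch size 32
```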