RDF-to-Text Generation with Graph-augmented Structural Neural Encoders
Authors: Hanning Gao, Lingfei Wu, Po Hu, Fangli Xu
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two different Web NLG datasets show that our proposed model outperforms the state-of-the-art baselines. |
| Researcher Affiliation | Collaboration | (1) School of Computer Science, Central China Normal University, Wuhan, China; (2) IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA; (3) Hubei Provincial Key Laboratory of Artificial Intelligence and Smart Learning, Wuhan, China; (4) Squirrel AI Learning |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available for research purpose. https://github.com/Nicoleqwerty/RDF-to-Text |
| Open Datasets | Yes | We use two different Web NLG datasets [Gardent et al., 2017a] which are designed for the task of mapping RDF triples to text. |
| Dataset Splits | Yes | The first dataset is the Web NLG 2017 challenge dataset, consisting of 18102 training pairs, 2268 validation pairs, and 2495 test pairs in 10 categories (Astronaut, Building, Monument, University, Sports Team, Written Work, etc.). The second supplementary dataset contains 13867 training pairs, 1762 validation pairs, and 1727 test pairs, and does not overlap the first; combined, the two datasets comprise 31969 training pairs, 4030 validation pairs, and 4222 test pairs (verified in the check after this table). |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU/CPU models or memory used for the experiments. |
| Software Dependencies | No | The paper mentions 'Adam [Kingma and Ba, 2014] as the optimization method' but does not provide specific version numbers for Adam or any other software libraries or frameworks used. |
| Experiment Setup | Yes | For model hyperparameters, we set 300-dimension source and target word embeddings and a 300-dimension hidden state for the bi-GCN encoder, meta-paths encoder, and decoder. We use Adam [Kingma and Ba, 2014] as the optimization method with an initial learning rate of 0.001, and learnable parameters are updated every 64 instances (i.e., a batch size of 64). A minimal code sketch of these settings follows this table. |
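The split counts reported in the Dataset Splits row are internally consistent: the 31969/4030/4222 figures equal the sums of the two non-overlapping datasets' per-split counts. A quick check in plain Python, using only the numbers quoted above:

```python
# Split sizes quoted in the Dataset Splits row above.
webnlg_2017 = {"train": 18102, "valid": 2268, "test": 2495}
supplementary = {"train": 13867, "valid": 1762, "test": 1727}

# The combined figures quoted alongside them.
combined = {"train": 31969, "valid": 4030, "test": 4222}

for split, total in combined.items():
    assert webnlg_2017[split] + supplementary[split] == total
    print(f"{split}: {webnlg_2017[split]} + {supplementary[split]} = {total}")
```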
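The Experiment Setup row translates directly into an optimizer configuration. Below is a minimal PyTorch sketch, assuming PyTorch as the framework (not confirmed by the quoted text); the `ToyEncoderDecoder` module and `VOCAB_SIZE` are hypothetical placeholders standing in for the authors' bi-GCN/meta-paths architecture, and only the dimensions, learning rate, and batch size come from the paper.

```python
import torch
import torch.nn as nn

# Values reported in the paper (quoted in the table above).
EMBED_DIM = 300       # source and target word embedding size
HIDDEN_DIM = 300      # hidden size for bi-GCN encoder, meta-paths encoder, decoder
LEARNING_RATE = 1e-3  # initial learning rate for Adam
BATCH_SIZE = 64       # "learnable parameters are updated every 64 instances"

VOCAB_SIZE = 10000    # assumption: vocabulary size is not reported in the quoted text

class ToyEncoderDecoder(nn.Module):
    """Hypothetical stand-in for the paper's graph-augmented encoder-decoder."""
    def __init__(self):
        super().__init__()
        self.src_embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.tgt_embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.encoder = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.decoder = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, src, tgt):
        _, h = self.encoder(self.src_embed(src))
        dec_out, _ = self.decoder(self.tgt_embed(tgt), h)
        return self.out(dec_out)

model = ToyEncoderDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```

A `DataLoader` with `batch_size=BATCH_SIZE` would realize the "updated every 64 instances" schedule.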