Graph Transformer for Graph-to-Sequence Learning

Authors: Deng Cai, Wai Lam (pp. 7464-7471)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on the applications of text generation from Abstract Meaning Representation (AMR) and syntax-based neural machine translation show the superiority of our proposed model.
Researcher Affiliation | Academia | Deng Cai, Wai Lam, The Chinese University of Hong Kong, thisisjcykcd@gmail.com, wlam@se.cuhk.edu.hk
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code available at https://github.com/jcyk/gtos.
Open Datasets | Yes | For the AMR-to-text generation task, we use two benchmarks, namely the LDC2015E86 dataset and the LDC2017T10 dataset. Both the English-German and the English-Czech datasets come from the WMT16 translation task (http://www.statmt.org/wmt16/translation-task.html).
Dataset Splits | Yes | Table 1: Data statistics of all four datasets. #train/dev/test indicates the number of instances in each set...
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper mentions software components like the Adam optimizer and implies Python for the released code, but does not provide specific version numbers for any libraries, frameworks, or programming languages used (e.g., PyTorch 1.x, Python 3.x).
Experiment Setup | Yes | Table 2: Hyper-parameters settings (the table includes the number of filters, width of filters, embedding sizes, number of attention heads, hidden state sizes, and the feed-forward hidden size). The paper also reports a beam size of 8, a dropout rate of 0.2, an UNK replacement rate of 0.33, the Adam optimizer with β1 = 0.9 and β2 = 0.999, and the same learning rate schedule as Vaswani et al. (2017).
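
To make the reported optimizer settings concrete, the following is a minimal sketch, assuming PyTorch, of Adam with the stated β1 = 0.9 and β2 = 0.999 paired with the Vaswani et al. (2017) learning rate schedule. The model dimension and warmup step count are illustrative assumptions and are not taken from the paper's Table 2.

import torch

# Learning rate schedule from Vaswani et al. (2017):
#   lr = d_model^(-0.5) * min(step^(-0.5), step * warmup_steps^(-1.5))
# d_model and warmup_steps are illustrative assumptions, not values from Table 2.
def noam_lr(step, d_model=512, warmup_steps=4000):
    step = max(step, 1)  # guard against division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

model = torch.nn.Linear(512, 512)  # placeholder model for illustration only

# Adam with the betas reported in the paper (beta1 = 0.9, beta2 = 0.999);
# a base lr of 1.0 lets the schedule alone determine the effective rate.
optimizer = torch.optim.Adam(model.parameters(), lr=1.0, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam_lr)

for step in range(5):  # toy loop showing how the schedule is advanced
    optimizer.zero_grad()
    loss = model(torch.randn(8, 512)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()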