KG-BART: Knowledge Graph-Augmented BART for Generative Commonsense Reasoning
Authors: Ye Liu, Yao Wan, Lifang He, Hao Peng, Philip S. Yu
AAAI 2021, pp. 6418-6425
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark CommonGen dataset verify the effectiveness of our proposed approach by comparing with several strong pre-trained language generation models |
| Researcher Affiliation | Academia | 1 University of Illinois at Chicago, Chicago, IL, USA; 2 Huazhong University of Science and Technology, Wuhan, China; 3 Lehigh University, Bethlehem, PA, USA; 4 Beihang University, Beijing, China |
| Pseudocode | No | The paper describes methods using textual descriptions and diagrams (Figures 2, 3, 4), but does not contain a formal pseudocode or algorithm block. |
| Open Source Code | Yes | Our code is available at https://github.com/yeliu918/KG-BART |
| Open Datasets | Yes | Dataset CommonGen (Lin et al. 2020) is a constrained text generation task, which is to explicitly test the ability of machines on commonsense reasoning when generating a text. The dataset released in this task is constructed through a combination of crowdsourced and existing caption corpora, which consists of 77k commonsense descriptions over 35k unique concept sets. |
| Dataset Splits | Yes | Train / Dev / Test: 32,651 / 993 / 1,497 concept sets; 67,389 / 4,018 / 6,042 sentences |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments. |
| Software Dependencies | No | The paper mentions various models and techniques (e.g., GPT-2, UniLM, T5, BART, RoBERTa, TransE, GloVe, CNN) but does not provide specific version numbers for software dependencies or libraries required for replication. |
| Experiment Setup | No | The paper describes the model architecture and pre-training objectives, but does not explicitly state specific hyperparameters (e.g., learning rate, batch size, number of epochs) or detailed training configurations used in their experiments. |
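
Since the paper reports the dataset splits but not its training hyperparameters, the sketch below shows how one might set up a plain BART baseline on CommonGen with the Hugging Face `datasets` and `transformers` libraries. The `common_gen` dataset identifier, the `facebook/bart-large` checkpoint, and every hyperparameter (learning rate, batch size, epoch count, beam size) are assumptions for illustration only, not settings taken from the paper, and KG-BART's knowledge-graph-augmented encoder/decoder layers are not reproduced here.

```python
# Illustrative BART baseline on CommonGen.
# NOTE: dataset id, checkpoint, and all hyperparameters are placeholders,
# not the settings reported in the KG-BART paper; the paper's
# knowledge-graph-augmented encoder/decoder is not implemented here.
import torch
from datasets import load_dataset
from transformers import BartTokenizer, BartForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
dataset = load_dataset("common_gen")  # assumed Hugging Face mirror of CommonGen
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large").to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # placeholder learning rate

def collate(batch):
    # Join each concept set into a single source string, e.g. "dog frisbee catch throw".
    sources = [" ".join(ex["concepts"]) for ex in batch]
    targets = [ex["target"] for ex in batch]
    enc = tokenizer(sources, padding=True, truncation=True, return_tensors="pt")
    lab = tokenizer(targets, padding=True, truncation=True, return_tensors="pt")
    labels = lab["input_ids"]
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    enc["labels"] = labels
    return {k: v.to(device) for k, v in enc.items()}

loader = torch.utils.data.DataLoader(
    dataset["train"], batch_size=8, shuffle=True, collate_fn=collate  # placeholder batch size
)

model.train()
for epoch in range(1):  # placeholder epoch count
    for batch in loader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Generate a sentence for one dev concept set with beam search (beam size is a placeholder).
model.eval()
inputs = tokenizer(" ".join(dataset["validation"][0]["concepts"]), return_tensors="pt").to(device)
output_ids = model.generate(**inputs, num_beams=5, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```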