SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning
Authors: Aaron Chan, Jiashu Xu, Boyuan Long, Soumya Sanyal, Tanishq Gupta, Xiang Ren
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On three commonsense QA benchmarks (CSQA, OBQA, CODAH) and a range of KG-augmented models, we show that SalKG can yield considerable performance gains (up to 2.76% absolute improvement on CSQA). |
| Researcher Affiliation | Academia | University of Southern California, IIT Delhi {chanaaro, boyuanlo, jiashuxu, soumyasa, xiangren}@usc.edu, Tanishq.Gupta.mt617@maths.iitd.ac.in |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled 'Pseudocode' or 'Algorithm', nor are there any structured code-like blocks outlining procedures. |
| Open Source Code | Yes | Code and data are available at: https://github.com/INK-USC/SalKG. |
| Open Datasets | Yes | We use the CSQA [52] and OBQA [39] multi-choice QA datasets. ... As in prior works, we use the ConceptNet [49] KG for both datasets. |
| Dataset Splits | No | For CSQA, we use the accepted in-house data split from [31], as the official test labels are not public. |
| Hardware Specification | No | The paper does not specify the exact hardware components (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like BERT, RoBERTa, and PyTorch, but it does not specify their version numbers or other ancillary software dependencies with specific versioning for reproducibility. |
| Experiment Setup | Yes | We use thresholds T = 0.01 and k = 10 for coarse and fine explanations, respectively. For text encoders, we use BERT(-Base) [11] and RoBERTa(-Large) [35]. For graph encoders, we use MHGRN [13], PathGen [56], and Relation Network (RN) [46, 31]. ... L_S = L_task + λL_sal, where λ ≥ 0 is a loss weighting parameter. |
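The Experiment Setup row quotes the paper's key hyperparameters: a coarse-saliency threshold T = 0.01, a fine-explanation budget of k = 10 units, and the combined objective L_S = L_task + λL_sal with λ ≥ 0. A minimal sketch of these three pieces follows; the function names and list-based interface are illustrative assumptions, not the paper's released PyTorch code.

```python
# Hedged sketch of the quoted setup: coarse saliency thresholding
# (T = 0.01), fine top-k explanation selection (k = 10), and the
# combined loss L_S = L_task + lambda * L_sal. Names are illustrative.

def coarse_labels(saliency_scores, threshold=0.01):
    """Binarize coarse saliency: 1 if the KG is deemed salient for
    an answer choice (score >= T), else 0."""
    return [1 if s >= threshold else 0 for s in saliency_scores]

def fine_labels(unit_scores, k=10):
    """Return the indices of the top-k most salient fine-grained
    KG units (e.g., nodes or paths)."""
    ranked = sorted(range(len(unit_scores)),
                    key=lambda i: unit_scores[i], reverse=True)
    return set(ranked[:k])

def combined_loss(task_loss, saliency_loss, lam=1.0):
    """L_S = L_task + lambda * L_sal, with lambda >= 0 weighting
    the saliency-supervision term against the task term."""
    assert lam >= 0, "lambda must be non-negative"
    return task_loss + lam * saliency_loss
```

With λ = 0 the objective reduces to the plain task loss, so λ directly controls how strongly the model is pushed to agree with the saliency explanations.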