Bayesian Inference with Complex Knowledge Graph Evidence
Authors: Armin Toroghi, Scott Sanner
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally evaluate BIKG in incremental KGQA and interactive recommendation tasks demonstrating that it outperforms non-incremental methodologies and leads to better incorporation of conjunctive evidence vs. existing complex KGQA methods like CQD that leverage fuzzy T-norm operators. |
| Researcher Affiliation | Academia | Armin Toroghi¹, Scott Sanner¹,² (¹Department of Mechanical and Industrial Engineering, University of Toronto; ²Vector Institute of Artificial Intelligence, Toronto). armin.toroghi@mail.utoronto.ca, ssanner@mie.utoronto.ca |
| Pseudocode | Yes | Algorithm 1: BIKG Algorithm (a generic sketch of incremental Bayesian evidence updating appears after this table) |
| Open Source Code | Yes | https://github.com/atoroghi/BIKG |
| Open Datasets | Yes | We evaluate BIKG on three KGs: FB15k (Bordes et al. 2013), FB15k-237 (Toutanova and Chen 2015), and NELL995 (Xiong, Hoang, and Wang 2017). ... MovieLens 20M (Harper and Konstan 2015) and LFM-1b (Schedl 2016). |
| Dataset Splits | No | The paper mentions 'training set of KG' and 'validation and test sets' when describing query extraction, but does not provide specific percentages or sample counts for train/validation/test splits of the KG data used for model training or evaluation. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions the use of the SimplE KGE method but does not provide specific version numbers for any software dependencies, such as the programming language, libraries, or frameworks used for implementation. |
| Experiment Setup | No | The paper provides the training objective for SimplE KGE (Equation 1), which includes a regularization hyperparameter λ, but it does not specify a concrete value for this hyperparameter or other experimental setup details such as learning rate, batch size, or optimizer settings (a hedged sketch of the standard SimplE objective follows this table). |
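The paper's Algorithm 1 (BIKG) is not reproduced in this report. As a generic illustration only, not the paper's algorithm, the sketch below shows the kind of incremental Bayesian update over a candidate-entity distribution that the quoted description implies, assuming each piece of evidence contributes a likelihood obtained by sigmoid-calibrating KGE scores; the candidate set, the scores, and the `incremental_update` helper are all hypothetical.

```python
import numpy as np

def incremental_update(prior, log_likelihoods):
    """One Bayesian evidence-incorporation step over a fixed candidate
    set: posterior is proportional to prior times likelihood, then
    renormalized. `prior` and `log_likelihoods` have equal length."""
    log_post = np.log(prior) + log_likelihoods
    log_post -= log_post.max()          # stabilize before exponentiation
    post = np.exp(log_post)
    return post / post.sum()

# Illustrative loop: fold in one piece of evidence at a time, with
# sigmoid-calibrated KGE scores standing in for P(evidence | entity).
belief = np.full(5, 1 / 5)              # uniform prior over 5 candidates
for kge_scores in [np.array([2.0, -1.0, 0.5, 0.0, -2.0]),
                   np.array([1.0, 1.0, -0.5, 2.0, 0.0])]:
    log_lik = -np.logaddexp(0.0, -kge_scores)   # log sigmoid(score)
    belief = incremental_update(belief, log_lik)
print(belief)
```

Because each update reuses the previous posterior as the new prior, conjunctive evidence can be absorbed one atom at a time, which is the incremental property the quoted abstract contrasts with non-incremental baselines.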
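The Experiment Setup row references the SimplE training objective (Equation 1) without stating λ. Below is a minimal sketch of the standard SimplE scoring function and softplus-plus-L2 objective from Kazemi and Poole (2018), which the paper builds on; the embedding dimension, initialization range, and the `lam=0.03` default are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class SimplE(nn.Module):
    """Minimal SimplE scorer: each entity has a head and a tail
    embedding; each relation has a forward and an inverse embedding."""
    def __init__(self, n_entities, n_relations, dim):
        super().__init__()
        self.eh = nn.Embedding(n_entities, dim)      # entity-as-head
        self.et = nn.Embedding(n_entities, dim)      # entity-as-tail
        self.r = nn.Embedding(n_relations, dim)      # relation
        self.r_inv = nn.Embedding(n_relations, dim)  # inverse relation
        for emb in (self.eh, self.et, self.r, self.r_inv):
            nn.init.uniform_(emb.weight, -0.1, 0.1)  # assumed init range

    def score(self, h, r, t):
        # Average of the forward and inverse trilinear products.
        fwd = (self.eh(h) * self.r(r) * self.et(t)).sum(-1)
        inv = (self.eh(t) * self.r_inv(r) * self.et(h)).sum(-1)
        return 0.5 * (fwd + inv)

def simple_loss(model, h, r, t, labels, lam=0.03):
    """Softplus loss over +/-1 triple labels plus L2 regularization;
    `lam` is a placeholder since the paper leaves λ unspecified."""
    scores = model.score(h, r, t)
    data_term = torch.nn.functional.softplus(-labels * scores).mean()
    reg = sum((p ** 2).sum() for p in model.parameters())
    return data_term + lam * reg
```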