KnowGPT: Knowledge Graph based Prompting for Large Language Models
Authors: Qinggang Zhang, Junnan Dong, Hao Chen, Daochen Zha, Zailiang Yu, Xiao Huang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three benchmark datasets demonstrate that KnowGPT significantly outperforms all competitors including the state-of-the-art GraphRAG models. |
| Researcher Affiliation | Academia | The Hong Kong Polytechnic University, Rice University, Zhejiang Lab |
| Pseudocode | No | The paper describes its methods in prose and uses mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | We have provided the code for the framework, accessible via this anonymous URL: https://anonymous.4open.science/status/KnowGPT-DD64. |
| Open Datasets | Yes | Datasets. We evaluate KnowGPT on three QA datasets spanning two fields: CommonsenseQA [66] and OpenBookQA [52] serve as benchmarks for commonsense reasoning, while MedQA-USMLE [34] acts as a domain-specific QA benchmark. |
| Dataset Splits | Yes | The statistics of these three datasets can be found in Table 5 in the Appendix. Table 5: The statistical information of three datasets (columns: Dataset, Question, Choices, Train, Dev, Test). |
| Hardware Specification | Yes | All models are implemented in PyTorch and trained on an RTX 3090 with 24 GB RAM. |
| Software Dependencies | No | The paper mentions 'All models are implemented in PyTorch' but does not specify a version number for PyTorch or any other software dependency. |
| Experiment Setup | No | The paper states that it uses policy gradients with gradient clipping and mentions fixing a random seed for its runs, but it does not report specific hyperparameter values (e.g., learning rate, batch size, number of epochs) for model training. |