Modeling Knowledge Graphs with Composite Reasoning
Authors: Wanyun Cui, Linqiu Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations demonstrate that mitigating the composition risk not only enhances the performance of TF-based models across all tested settings, but also surpasses or is competitive with state-of-the-art performance on two out of four benchmarks. |
| Researcher Affiliation | Academia | Wanyun Cui, Linqiu Zhang Shanghai University of Finance and Economics cui.wanyun@sufe.edu.cn, zhang.linqiu@stu.sufe.edu.cn |
| Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper. |
| Open Source Code | Yes | Our code, data and supplementary material are available at https://github.com/zlq147/CompilE |
| Open Datasets | Yes | We use four datasets of different scales, including two larger datasets (FB15k-237 and WN18RR) and two smaller datasets (UMLS and Kinship). Our code, data and supplementary material are available at https://github.com/zlq147/CompilE |
| Dataset Splits | No | No specific dataset split information (percentages or counts) for a validation set was provided. The paper mentions evaluating on test sets but does not detail how validation sets were used or their specific splits (a sketch after this table shows how such splits are conventionally laid out). |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) were mentioned in the paper. |
| Experiment Setup | No | No specific experimental setup details, such as hyperparameter values (e.g., learning rate, batch size, epochs) or optimizer settings, were provided in the main text of the paper. |
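
Although the paper does not report split sizes, the benchmarks named in the Open Datasets row (FB15k-237, WN18RR, UMLS, Kinship) are conventionally distributed as tab-separated train/valid/test triple files, so the splits can be counted directly from the data. Below is a minimal sketch under that assumption; the directory path and file names are illustrative defaults, not details confirmed by the paper or its repository.

```python
# Minimal sketch: count train/valid/test triples in a knowledge-graph benchmark,
# assuming the common layout of one tab-separated (head, relation, tail) per line.
# The data directory path below is hypothetical.
from pathlib import Path


def load_triples(path: Path) -> list[tuple[str, str, str]]:
    """Read tab-separated (head, relation, tail) triples, one per line."""
    triples = []
    with path.open(encoding="utf-8") as f:
        for line in f:
            head, relation, tail = line.rstrip("\n").split("\t")
            triples.append((head, relation, tail))
    return triples


if __name__ == "__main__":
    data_dir = Path("data/FB15k-237")  # hypothetical local path to one benchmark
    for split in ("train", "valid", "test"):
        triples = load_triples(data_dir / f"{split}.txt")
        print(f"{split}: {len(triples)} triples")
```

Running this against each of the four benchmarks would recover the exact split counts that the paper itself leaves unstated.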