Differentiable Neuro-Symbolic Reasoning on Large-Scale Knowledge Graphs

Authors: Shengyuan Chen, Yunfeng Cai, Huang Fang, Xiao Huang, Mingming Sun

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "On benchmark datasets, we empirically show that DiffLogic surpasses baselines in both effectiveness and efficiency." "In this section, we conduct experiments to answer the following research questions."
Researcher Affiliation | Collaboration | Shengyuan Chen, Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Hong Kong SAR (shengyuan.chen@connect.polyu.hk); Yunfeng Cai, Cognitive Computing Lab, Baidu Research, 10 Xibeiwang East Rd., Beijing, China (caiyunfeng@baidu.com)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | Yes | "We incorporate four real-world knowledge graph datasets: YAGO3-10, WN18, WN18RR, and CoDEx (available in three sizes: small, medium, and large), along with a synthetic logic reasoning dataset: Kinship. Dataset statistics and descriptions can be found in Appendix B.1." "Table 6: Statistics of real-world knowledge base datasets."
Dataset Splits | Yes | "Table 6: Statistics of real-world knowledge base datasets. Dataset #Ent #Rel #Train/Valid/Test #Rules" "The optimum weight coefficient η is selected by using the validation set."
Hardware Specification | Yes | "All the runtime experiments are conducted in the same machine with configurations as in Table 9." "Table 9: Machine configuration. Component Specification GPU NVIDIA GeForce RTX 3090 CPU Intel(R) Xeon(R) Silver 4214R CPU @ 2.40GHz"
Software Dependencies | No | The paper mentions that models are 'implemented in Python' but does not provide specific version numbers for Python itself or any other relevant software libraries or dependencies.
Experiment Setup | No | The paper mentions that 'Hyperparameters for each baseline are taken from their original paper' and 'The optimum weight coefficient η is selected by using the validation set', but it does not explicitly provide concrete hyperparameter values or detailed training configurations for its own model (DiffLogic) in the main text.