Logical Message Passing Networks with One-hop Inference on Atomic Formulas

Authors: Zihao Wang, Yangqiu Song, Ginny Wong, Simon See

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "7 EXPERIMENTS: In this section, we compare LMPNN with existing neural CQA methods and justify the important features of LMPNN with ablation studies. Our results show that LMPNN is a very strong method for answering complex queries."
Researcher Affiliation | Collaboration | Zihao Wang & Yangqiu Song, CSE, HKUST, Hong Kong SAR ({zwanggc,yqsong}@cse.ust.hk); Ginny Y. Wong & Simon See, NVIDIA AI Technology Center (NVAITC), NVIDIA, Santa Clara, USA ({gwong,ssee}@nvidia.com)
Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | "Our implementation can be found at https://github.com/HKUST-KnowComp/LMPNN."
Open Datasets | Yes | "We compare the results on FB15k (Bordes et al., 2013), FB15k-237 (Toutanova et al., 2015), and NELL (Carlson et al., 2010)."
Dataset Splits | No | The paper uses widely adopted training and evaluation datasets but does not explicitly report train/validation/test split percentages or sample counts. (A sketch for recovering the counts from the released benchmark files is given after the table.)
Hardware Specification | Yes | "All experiments of LMPNN are conducted on a single V100 GPU (16GB)."
Software Dependencies | No | The paper mentions training with AdamW but does not provide version numbers for any software, libraries, or frameworks used in the experiments.
Experiment Setup | Yes | "The learning rate is 1e-4, and the weight decay is 1e-4. The batch size is 1,024, and the negative sample size is 128, selected from {32, 128, 512}. The MLP network has one hidden layer whose dimension is 8,192 for NELL and FB15k, and 4,096 for FB15k-237. T in the training objective is chosen as 0.05 for FB15k-237 and FB15k, and 0.1 for NELL. ϵ in Eq. (9) is chosen to be 0.1." (A configuration sketch is given after the table.)
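
On dataset splits: the per-split query counts are not in the paper, but they can typically be recovered from the released benchmark files. Below is a minimal Python sketch, assuming a BetaE-style layout with per-split pickle files; the directory and file names are illustrative assumptions, not the authors' exact release.

    import pickle
    from pathlib import Path

    # Hypothetical layout mirroring the widely used BetaE-style benchmark
    # release; the directory and file names are assumptions.
    data_dir = Path("data/FB15k-237-betae")
    for split in ("train", "valid", "test"):
        with open(data_dir / f"{split}-queries.pkl", "rb") as f:
            queries = pickle.load(f)  # dict: query structure -> set of queries
        total = sum(len(qs) for qs in queries.values())
        print(f"{split}: {total} queries over {len(queries)} query structures")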
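
On the experiment setup: to make the reported hyperparameters concrete, here is a minimal PyTorch sketch of the stated configuration. Only the numeric values come from the paper; the embedding dimension, module structure, and variable names are illustrative assumptions, not the authors' implementation.

    import torch
    from torch import nn

    # Hyperparameters reported in the paper.
    LR = 1e-4
    WEIGHT_DECAY = 1e-4
    BATCH_SIZE = 1024
    NEGATIVE_SAMPLES = 128    # selected from {32, 128, 512}
    TEMPERATURE = 0.05        # the paper's T; 0.1 for NELL
    EPSILON = 0.1             # the paper's eps in Eq. (9)

    # Illustrative assumptions (not stated in the quoted setup).
    ENTITY_DIM = 400          # hypothetical embedding dimension
    HIDDEN_DIM = 8192         # one hidden layer; 4096 for FB15k-237

    # One-hidden-layer MLP with the stated hidden dimension.
    mlp = nn.Sequential(
        nn.Linear(ENTITY_DIM, HIDDEN_DIM),
        nn.ReLU(),
        nn.Linear(HIDDEN_DIM, ENTITY_DIM),
    )

    # AdamW with the reported learning rate and weight decay.
    optimizer = torch.optim.AdamW(
        mlp.parameters(), lr=LR, weight_decay=WEIGHT_DECAY
    )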