Temporal Inductive Logic Reasoning over Hypergraphs

Authors: Yuan Yang, Siheng Xiong, Ali Payani, James C. Kerce, Faramarz Fekri

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We create and release two temporal hypergraph datasets, YouCook2-HG and nuScenes-HG, which are created from the YouCook2 cooking recipe dataset and the nuScenes autonomous driving dataset [Caesar et al., 2020]. Experiments on these benchmarks demonstrate that TILR achieves superior reasoning capability over various strong baselines.
Researcher Affiliation | Collaboration | Yuan Yang (1), Siheng Xiong (1), Ali Payani (2), James C. Kerce (1), and Faramarz Fekri (1). (1) Georgia Institute of Technology; (2) Cisco. {yyang754@, sxiong45@, clayton.kerce@gtri., faramarz.fekri@ece.}gatech.edu, apayani@cisco.com
Pseudocode | Yes | Algorithm 1: Multi-start Random B-walk and Algorithm 2: Path-consistency for temporal relation generalization. (Illustrative sketches of both algorithms appear after this table.)
Open Source Code | No | The paper states: "We release two novel temporal hypergraph datasets YouCook2-HG and nuScenes-HG here", referring to the datasets, but no explicit statement or link is provided for the open-source code of the methodology itself.
Open Datasets | Yes | We create and release two temporal hypergraph datasets, YouCook2-HG and nuScenes-HG, which are created from the YouCook2 cooking recipe dataset and the nuScenes autonomous driving dataset [Caesar et al., 2020].
Dataset Splits | No | The paper mentions a "training split" and evaluates performance, but it does not provide specific percentages, counts, or a detailed methodology for how the datasets were split into training, validation, and test sets for reproducibility.
Hardware Specification | Yes | All experiments are done on a PC with an i7-8700K and one GTX 1080 Ti.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) used for the experiments.
Experiment Setup | No | The paper describes the different modes of TILR and the loss function used for training, but it does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings.
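
For orientation, below is a minimal sketch of a multi-start random walk over an undirected hypergraph, in the spirit of Algorithm 1 (Multi-start Random B-walk). The hypergraph encoding, step rule, and all function names here are assumptions made for exposition; the paper's B-walk pseudocode itself is not reproduced.

```python
import random

def build_incidence(hyperedges):
    """Map each node to the hyperedges that contain it."""
    incident = {}
    for edge in hyperedges:
        for node in edge:
            incident.setdefault(node, []).append(edge)
    return incident

def random_hyperwalk(incident, start, length, rng):
    """Walk up to `length` steps from `start`: each step samples a random
    incident hyperedge, then hops to a random other node inside it."""
    path = [start]
    current = start
    for _ in range(length):
        edges = incident.get(current, [])
        if not edges:
            break  # dead end: no incident hyperedge
        edge = rng.choice(edges)
        neighbors = [n for n in edge if n != current]
        if not neighbors:
            break  # singleton hyperedge
        current = rng.choice(neighbors)
        path.append(current)
    return path

def multi_start_walks(hyperedges, starts, walks_per_start, length, seed=0):
    """Pool walks launched from several start nodes, e.g. as candidate
    grounding paths for rule induction."""
    rng = random.Random(seed)
    incident = build_incidence(hyperedges)
    return [random_hyperwalk(incident, s, length, rng)
            for s in starts for _ in range(walks_per_start)]

# Toy usage: hyperedges given as tuples of node ids.
H = [("a", "b", "c"), ("c", "d"), ("d", "e", "a")]
print(multi_start_walks(H, starts=["a", "c"], walks_per_start=2, length=4))
```

Launching walks from multiple start nodes trades per-walk depth for broader coverage of the hypergraph, which is the usual motivation for a multi-start scheme.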
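Similarly, the following is a standard path-consistency routine over the point algebra {<, =, >}, illustrating the technique named in Algorithm 2 (Path-consistency for temporal relation generalization). The relation set, composition table, and function names are assumptions chosen for brevity, not the authors' implementation.

```python
from itertools import product

BASE = frozenset({"<", "=", ">"})
CONVERSE = {"<": ">", "=": "=", ">": "<"}

# Composition of base relations: e.g. (a < b) and (b < c) entail (a < c),
# while (a < b) and (b > c) leave a vs. c unconstrained.
COMPOSE = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): set(BASE),
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): set(BASE), (">", "="): {">"}, (">", ">"): {">"},
}

def compose(r1, r2):
    """Compose two disjunctive relations (sets of base relations)."""
    out = set()
    for a, b in product(r1, r2):
        out |= COMPOSE[(a, b)]
    return out

def path_consistency(n, constraints):
    """Tighten an n-variable temporal constraint network to path consistency.

    `constraints` maps ordered pairs (i, j) to sets of base relations;
    unspecified pairs default to the universal relation. Returns the
    tightened network, or None if a constraint becomes empty (inconsistency).
    """
    R = {(i, j): set(BASE) for i in range(n) for j in range(n)}
    for (i, j), rel in constraints.items():
        R[(i, j)] &= set(rel)
        R[(j, i)] &= {CONVERSE[a] for a in rel}  # keep converses in sync
    changed = True
    while changed:  # propagate R(i,j) <- R(i,j) & (R(i,k) o R(k,j)) to a fixpoint
        changed = False
        for i, k, j in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            tightened = R[(i, j)] & compose(R[(i, k)], R[(k, j)])
            if tightened != R[(i, j)]:
                if not tightened:
                    return None  # inconsistent network
                R[(i, j)] = tightened
                changed = True
    return R

# Example: a < b and b < c force a < c (and, via converses, c > a).
net = path_consistency(3, {(0, 1): {"<"}, (1, 2): {"<"}})
assert net[(0, 2)] == {"<"} and net[(2, 0)] == {">"}
```

Path consistency of this kind is the standard way to derive implied temporal relations between events that are never directly compared, which is how it supports generalization over temporal relations.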