Learning Continuous-Time Bayesian Networks in Relational Domains: A Non-Parametric Approach

Authors: Shuo Yang, Tushar Khot, Kristian Kersting, Sriraam Natarajan

AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental results demonstrate that RCTBNs can learn as effectively as state-of-the-art approaches for propositional tasks while modeling relational tasks faithfully."
Researcher Affiliation | Collaboration | Shuo Yang (School of Informatics and Computing, Indiana University, Bloomington, IN 47408); Tushar Khot (Allen Institute for AI, Seattle, WA 98103); Kristian Kersting (Department of Computer Science, TU Dortmund University, Germany); Sriraam Natarajan (School of Informatics and Computing, Indiana University, Bloomington, IN 47408)
Pseudocode | Yes | Algorithm 1 (RCTBN-RFGB: RFGB for RCTBNs) and Algorithm 2 (example generation for RCTBNs). A sketch of the boosting loop appears after this table.
Open Source Code | No | The paper does not provide an explicit statement or link for open-sourcing its own code. It only thanks "Jeremy Weiss for the mf CTBN code and inputs", referring to third-party code.
Open Datasets | Yes | "More precisely, we employed three standard CTBN datasets from prior literature, i.e. the Drug model (Nodelman, Shelton, and Koller 2003), Multi Health model (Weiss, Natarajan, and Page 2012) and S100 model (Weiss, Natarajan, and Page 2012)."
Dataset Splits | Yes | "We ran 5-fold cross validation to generate the learning curve of the log-likelihood and AUC-ROC on the test set." A sketch of this evaluation protocol appears after this table.
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using "standard off-the-shelf relational regression tree learning approaches (Blockeel and Raedt 1998)" and refers to the "mf CTBN code" from a collaborator, but it does not name any software package with a version number.
Experiment Setup | No | The paper states that experiments were run and validated, but it does not report the hyperparameters, training configurations, or other system-level settings used.
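
The paper's Algorithm 1 (RCTBN-RFGB) is not reproduced on this page. As rough orientation only, below is a minimal, runnable sketch of the generic functional-gradient-boosting loop that RFGB-style learners build on. It uses a propositional DecisionTreeRegressor as a stand-in for the paper's relational regression trees and synthetic data; none of the names, settings, or data come from the paper.

```python
# Minimal sketch of functional gradient boosting in the spirit of RFGB
# (the learner behind RCTBN-RFGB). A propositional DecisionTreeRegressor
# stands in for the paper's relational regression trees; the data is
# synthetic. This is an illustration, not the authors' implementation.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rfgb_fit(X, y, n_trees=20, max_depth=2):
    """Boost shallow regression trees on the functional gradients of the
    Bernoulli log-likelihood: gradient_i = y_i - P(y_i = 1 | psi)."""
    trees = []
    psi = np.zeros(len(y))                  # current potential function
    for _ in range(n_trees):
        grad = y - sigmoid(psi)             # pointwise functional gradient
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, grad)
        trees.append(tree)
        psi += tree.predict(X)              # update the potential
    return trees

def rfgb_predict_proba(trees, X):
    psi = sum(tree.predict(X) for tree in trees)
    return sigmoid(psi)

# Tiny synthetic demo: the first feature separates the two classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)
trees = rfgb_fit(X, y)
print(rfgb_predict_proba(trees, X[:5]))
```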
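
Likewise, the quoted 5-fold protocol can be illustrated with a short, self-contained sketch: per-fold held-out log-likelihood and AUC-ROC. The model, data, and tooling (scikit-learn's KFold, log_loss, and roc_auc_score) are assumptions for illustration; the paper does not state what software computed its metrics.

```python
# Sketch of the evaluation protocol from the Dataset Splits row: 5-fold
# cross validation reporting held-out log-likelihood and AUC-ROC. The
# logistic-regression model and synthetic data are placeholders.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, log_loss

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    p = model.predict_proba(X[test_idx])[:, 1]
    # Mean held-out log-likelihood is the negative of the mean log-loss.
    ll = -log_loss(y[test_idx], p)
    auc = roc_auc_score(y[test_idx], p)
    print(f"fold {fold}: log-likelihood = {ll:.3f}, AUC-ROC = {auc:.3f}")
```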