Knowledge-Based Probabilistic Logic Learning

Authors: Phillip Odom, Tushar Khot, Reid Porter, Sriraam Natarajan

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present empirical analysis to study different questions: 1) how effective is advice for relational classification, 2) can we employ advice in sequential problems (specifically, imitation learning), 3) does our method leverage knowledge across domains (transfer learning), and 4) how does the advice help in standard domains for learning MLNs? To compare the approaches, we use test-set accuracy averaged over multiple folds. We evaluate the results across three data sets.
Researcher Affiliation | Academia | Phillip Odom, Dept. of Computer Science, Indiana University, Bloomington, IN; Tushar Khot, Dept. of Computer Science, University of Wisconsin, Madison, WI; Reid Porter, Intelligence and Space Research Division, Los Alamos National Laboratory, Los Alamos, NM; Sriraam Natarajan, School of Informatics and Computing, Indiana University, Bloomington, IN
Pseudocode | Yes | Algorithm 1, ARFGB: Advice for Relational Function Gradient Boosting (see the gradient sketch after this table).
Open Source Code | No | The paper does not contain any explicit statement about providing open-source code or a link to a code repository for the described methodology.
Open Datasets | Yes | We use the Drosophila dataset (Cardona et al. 2010).
Dataset Splits | Yes | We divide the dataset into 4 train and test sets, where each training set consists of 3 layers, while each test set consists of 2 layers that are 10 layers away from the training set (see the fold-construction sketch after this table).
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU, CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper discusses various models and frameworks (e.g., RFGB, MLNs, RDNs) but does not provide specific version numbers for any software dependencies or libraries used for implementation.
Experiment Setup | Yes | All the boosting approaches learn five trees in this domain.
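
The paper provides pseudocode (Algorithm 1, ARFGB) but no source code. As a rough illustration of the underlying idea, the sketch below shows one way label-preference advice could be blended into the pointwise functional gradient used by RFGB-style boosting. The AdviceRule class, the alpha trade-off parameter, and the n_t/n_f rule counts are illustrative assumptions, not a transcription of the paper's Algorithm 1.

```python
# Minimal sketch (not the paper's Algorithm 1): folding label-preference
# advice into the pointwise functional gradient of RFGB-style boosting.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class AdviceRule:
    """A piece of expert advice: when it covers an example, it argues for a label."""
    covers: Callable[[dict], bool]   # does the rule apply to this example?
    prefers_positive: bool           # label the rule argues for when it applies


def advice_counts(example: dict, rules: List[AdviceRule]) -> Tuple[int, int]:
    """Count rules preferring the positive label (n_t) and the negative label (n_f)."""
    n_t = sum(1 for r in rules if r.covers(example) and r.prefers_positive)
    n_f = sum(1 for r in rules if r.covers(example) and not r.prefers_positive)
    return n_t, n_f


def advice_gradient(label: int, prob_pos: float, example: dict,
                    rules: List[AdviceRule], alpha: float = 0.5) -> float:
    """Advice-augmented pointwise gradient for one example.

    The standard RFGB gradient is I(y=1) - P(y=1 | x); here an additive
    advice term (n_t - n_f) is blended in with weight (1 - alpha). This
    additive form is an assumption for illustration only.
    """
    data_term = label - prob_pos
    n_t, n_f = advice_counts(example, rules)
    return alpha * data_term + (1.0 - alpha) * (n_t - n_f)


if __name__ == "__main__":
    # Hypothetical relational example plus one piece of advice that covers it.
    x = {"advisedBy": True, "num_publications": 3}
    rules = [AdviceRule(covers=lambda e: e["advisedBy"], prefers_positive=True)]
    print(advice_gradient(label=1, prob_pos=0.3, example=x, rules=rules))  # 0.85
```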
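
Similarly, the layered train/test split described in the Dataset Splits row can be made concrete with a small sketch. The fold spacing, layer indexing, and total number of layers below are assumptions for illustration; the paper only states that each training set uses 3 layers and each test set uses 2 layers located 10 layers away from the training set.

```python
# Sketch of the layered split: 4 folds, each with 3 training layers and
# 2 test layers that sit 10 layers beyond the training block.
from typing import List, Tuple


def layered_folds(num_layers: int, num_folds: int = 4,
                  train_size: int = 3, test_size: int = 2,
                  gap: int = 10) -> List[Tuple[List[int], List[int]]]:
    """Build (train_layers, test_layers) index pairs for each fold."""
    folds = []
    for fold in range(num_folds):
        start = fold * (train_size + gap + test_size)  # assumed fold spacing
        train = list(range(start, start + train_size))
        test_start = start + train_size + gap          # "10 layers away"
        test = list(range(test_start, test_start + test_size))
        if test[-1] >= num_layers:
            raise ValueError("not enough layers for the requested folds")
        folds.append((train, test))
    return folds


if __name__ == "__main__":
    for train, test in layered_folds(num_layers=60):
        print("train layers:", train, "test layers:", test)
```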