Induction of Interpretable Possibilistic Logic Theories from Relational Data

Authors: Ondrej Kuzelka, Jesse Davis, Steven Schockaert

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare our approach's learned models to learned MLNs for various MAP inference tasks. We use two standard SRL datasets: UWCSE and Yeast Proteins. We evaluate the performance of the learned models as follows.
Researcher Affiliation | Academia | Ondřej Kuželka, Cardiff University, UK (KuzelkaO@cardiff.ac.uk); Jesse Davis, KU Leuven, Belgium (jesse.davis@cs.kuleuven.be); Steven Schockaert, Cardiff University, UK (SchockaertS1@cardiff.ac.uk)
Pseudocode | No | The paper describes algorithmic procedures such as a beam search method and a greedy approach for weight learning, but it does not present these in a structured pseudocode or algorithm block format.
Open Source Code | No | The paper states: 'We also provide an online appendix to this paper (http://arxiv.org/abs/1705.07095) with additional illustrating examples and experimental results.' This link points to an arXiv preprint, which typically hosts the paper itself or supplementary PDFs, not necessarily the source code for the described methodology. There is no explicit statement about releasing the code for their work.
Open Datasets | Yes | We use two standard SRL datasets: UWCSE and Yeast Proteins. The UWCSE dataset... [https://alchemy.cs.washington.edu/data/uw-cse/]... The Yeast-Proteins dataset...
Dataset Splits | Yes | The UWCSE dataset is split into five groups: AI, language, theory, graphics, and systems. We use AI, language and theory as a training set and graphics and systems as a test set. We randomly divide the constants (entities) in this dataset [Yeast-Proteins] into two disjoint sets of equal size. The training set consists of atoms containing only the constants from the first set and the test set contains only the constants from the second set.
Hardware Specification | No | The paper does not explicitly describe any specific hardware components (e.g., CPU, GPU models, memory, or cloud instances) used for running the experiments. It mentions a 'speed-up' in inference but no underlying hardware.
Software Dependencies | No | The paper mentions several software components, including Java, the SAT4j library, CryptoMiniSat, the JOptimizer package, the Alchemy package, and RockIt. However, it does not provide specific version numbers for these software dependencies, which are necessary for reproducible descriptions.
Experiment Setup | Yes | To find Horn rules, we employ a beam search method, which relies on two parameters: the size of the beam b and the maximum number of literals in the body of a rule l. For each clause, we check whether Υ |= α holds with a CSP solver. We are interested in hard rules that are universally quantified, constant-free clauses with no counterexamples in Υ. We find such clauses by exhaustively constructing all clauses (modulo isomorphism) containing at most t literals and at most k variables, where k is the width of the relational marginal distribution and t is a parameter of the method.
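The entity-disjoint split quoted in the Dataset Splits row (randomly dividing constants into two halves and keeping only atoms whose constants all fall on one side) can be sketched as follows. This is an illustrative sketch, not the authors' code; the function name and the tuple representation of atoms are assumptions.

```python
import random

def entity_disjoint_split(atoms, constants, seed=0):
    """Split relational data by entities, as described for Yeast-Proteins:
    constants are randomly partitioned into two disjoint, equal-size sets;
    an atom goes to train (resp. test) only if ALL its constants lie in the
    first (resp. second) set. Atoms mixing both halves are dropped.
    Atoms are hypothetical (predicate, arg1, arg2, ...) tuples."""
    rng = random.Random(seed)
    shuffled = list(constants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    train_consts = set(shuffled[:half])
    test_consts = set(shuffled[half:])
    train = [a for a in atoms if set(a[1:]) <= train_consts]
    test = [a for a in atoms if set(a[1:]) <= test_consts]
    return train, test
```

Note that some atoms are necessarily discarded (those linking entities across the two halves), which is the price of keeping the train and test entity sets fully disjoint.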
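The beam search quoted in the Experiment Setup row (beam size b, at most l body literals) follows a generic pattern that can be sketched as below. This is a minimal sketch of level-wise beam search over rule bodies, not the paper's implementation; the scoring interface and all names are hypothetical, and the paper's isomorphism checks and CSP-based entailment tests are abstracted into the score function.

```python
def beam_search_rules(head, literals, score, b=5, l=3):
    """Level-wise beam search for a Horn-rule body (sketch).
    Starts from the empty body; at each level, every body in the beam is
    extended by one unused literal, and only the b best-scoring bodies
    survive. Stops after bodies reach l literals. `score(head, body)`
    is a caller-supplied quality measure (hypothetical interface)."""
    beam = [()]  # candidate bodies, each a tuple of literals
    best = ()
    for _ in range(l):
        candidates = set()
        for body in beam:
            for lit in literals:
                if lit not in body:
                    # sort so that permutations of the same body deduplicate
                    candidates.add(tuple(sorted(body + (lit,))))
        if not candidates:
            break
        # sort twice: lexicographic first for deterministic tie-breaking,
        # then (stably) by score, descending
        beam = sorted(sorted(candidates), key=lambda bd: score(head, bd),
                      reverse=True)[:b]
        if score(head, beam[0]) > score(head, best):
            best = beam[0]
    return best
```

A greedy choice of beam size trades completeness for speed: with b=1 this degenerates to hill climbing, while a large b approaches exhaustive level-wise search.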