A Probabilistic Approach to Knowledge Translation

Authors: Shangpu Jiang, Daniel Lowd, Dejing Dou

AAAI 2016

Reproducibility variable, result, and LLM response for each criterion:
Research Type: Experimental
  LLM Response: "In experiments on both propositional and relational domains, we find that the knowledge obtained by KT is comparable to other approaches that require data, demonstrating that knowledge can be reused without data." "We implement an experimental KT system and evaluate it on two real datasets." "We run experiments on synthetic and real datasets."
Researcher Affiliation: Academia
  LLM Response: "Shangpu Jiang, Daniel Lowd, Dejing Dou; Computer and Information Science, University of Oregon, USA; {shangpu,lowd,dou}@cs.uoregon.edu"
Pseudocode: Yes
  LLM Response: "We present the pseudocode of our heuristic structure translation in Algorithm 1." "Algorithm 1: Structure Translation (MRFs or MLNs)" (A hedged sketch of this weight-thresholded translation step appears after this table.)
Open Source Code: No
  LLM Response: The paper mentions using the Libra Toolkit and Alchemy but does not state that the authors are releasing the source code for their own knowledge translation system or its components.
Open Datasets: Yes
  LLM Response: "We use the UW-CSE dataset (http://alchemy.cs.washington.edu/data/uw-cse/) and the UO dataset which we collected from the Department of Computer Science of the University of Oregon. The UW-CSE dataset was introduced by Richardson and Domingos (Richardson and Domingos 2006) and is widely used in statistical relational learning research."
Dataset Splits: Yes
  LLM Response: "We first left out 1/5 of the data instances in the source and target dataset as the testing sets. We used standard 4-fold cross-validation to determine the parameters of the learning algorithm. The parameters include κ, prior, and mincount for decision tree learning, and l2 for weight learning." (A sketch of this split protocol appears after this table.)
Hardware Specification: No
  LLM Response: The paper does not provide any specific hardware details such as CPU models, GPU models, or cloud computing instance types used for running the experiments.
Software Dependencies: No
  LLM Response: The paper mentions using the Libra Toolkit and Alchemy but does not specify their version numbers, or the versions of any other software dependencies required for replication.
Experiment Setup: Yes
  LLM Response: "The parameters include κ, prior, and mincount for decision tree learning, and l2 for weight learning." "For structure translation with TS-KS, we only translate features for which the absolute value of the weight is greater than a threshold θ. These two parameters are tuned with cross-validation over a partition of the samples." "We set the number of constants of each type to be the average number over all training databases, multiplied by a scalar 1/2 for more efficient inference." "We set N to 1, 2 and 5 in our experiments." "We set the l2 prior for weight learning to 10, based on cross-validation over samples." (A sketch of this cross-validated parameter selection appears after this table.)
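
The paper's Algorithm 1 itself is not reproduced on this page, but the quoted setup (only features whose absolute weight exceeds a threshold θ are translated) suggests the rough shape below. This is a minimal Python sketch under that assumption: translate_structure, the tuple-of-predicate-names feature encoding, and the mapping dictionary are illustrative guesses, not the authors' implementation.

    # Hypothetical sketch of weight-thresholded feature translation. Features
    # are simplified to tuples of predicate names; the paper's actual
    # Algorithm 1 operates on MRF/MLN formulas and is more involved.
    def translate_structure(source_features, mapping, theta=0.5):
        """Carry high-weight source features over to the target schema.

        source_features: iterable of (feature, weight) pairs from the source model.
        mapping: dict from source predicate names to target predicate names.
        theta: minimum absolute weight for a feature to be translated.
        """
        target_features = []
        for feature, weight in source_features:
            if abs(weight) <= theta:
                continue  # drop low-weight features, per the quoted heuristic
            try:
                translated = tuple(mapping[pred] for pred in feature)
            except KeyError:
                continue  # skip features that mention unmapped predicates
            target_features.append((translated, weight))
        return target_features

    # Toy usage with made-up predicate names:
    src = [(("AdvisedBy", "Professor"), 1.7), (("CoAuthor",), 0.1)]
    name_map = {"AdvisedBy": "Advises", "Professor": "Faculty",
                "CoAuthor": "WritesWith"}
    print(translate_structure(src, name_map, theta=0.5))  # keeps the first feature only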
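The split protocol quoted under Dataset Splits (hold out 1/5 of the instances as a test set, then standard 4-fold cross-validation on the remainder for parameter tuning) could be reproduced along these lines; the instance list is a placeholder, and the fold loop only checks its own bookkeeping.

    # Sketch of the quoted split protocol with placeholder data.
    import random

    random.seed(0)
    instances = list(range(100))      # placeholder for the dataset instances
    random.shuffle(instances)

    n_test = len(instances) // 5      # leave out 1/5 as the testing set
    test_set, train_set = instances[:n_test], instances[n_test:]

    # Standard 4-fold cross-validation over the remaining instances.
    fold_size = len(train_set) // 4
    for k in range(4):
        valid = train_set[k * fold_size:(k + 1) * fold_size]
        train = train_set[:k * fold_size] + train_set[(k + 1) * fold_size:]
        # Fit with candidate parameters on `train`, score on `valid`.
        assert not set(valid) & set(train)  # folds stay disjoint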
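Finally, the parameter selection quoted under Experiment Setup (κ, prior, and mincount for decision tree learning; l2 for weight learning, set to 10 by cross-validation) amounts to a small grid search over those folds. The candidate grids and the scoring function below are assumptions for illustration; only l2 = 10 is a value the paper reports.

    # Hedged grid-search sketch; grids and scoring are placeholders.
    from itertools import product

    grid = {
        "kappa":    [0.5, 1.0, 2.0],   # assumed candidates, not from the paper
        "prior":    [0.1, 1.0],
        "mincount": [5, 10, 20],
        "l2":       [1, 10, 100],      # the paper reports selecting l2 = 10
    }

    def cv_score(params):
        # Placeholder: a real run would average the 4-fold validation scores
        # from the split sketch above. This dummy just prefers l2 = 10.
        return -abs(params["l2"] - 10)

    best = max(
        (dict(zip(grid, values)) for values in product(*grid.values())),
        key=cv_score,
    )
    print(best)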