A Comparative Study of Distributional and Symbolic Paradigms for Relational Learning
Authors: Sebastijan Dumančić, Alberto García-Durán, Mathias Niepert
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this work, we compare distributional and symbolic relational learning approaches on various standard relational classification and knowledge base completion tasks. Furthermore, we analyse the complexity of the rules used implicitly by these approaches and relate them to the performance of the methods in the comparison. The results reveal possible indicators that could help in choosing one approach over the other for particular knowledge graphs. |
| Researcher Affiliation | Collaboration | Sebastijan Dumančić¹, Alberto García-Durán² and Mathias Niepert³. ¹KU Leuven, Belgium; ²EPFL, Switzerland; ³NEC Labs, Germany |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. The supplementary material link provided is for the paper itself, not a code repository. |
| Open Datasets | Yes | From the symbolic community, we focus on the following datasets: UWCSE, Mutagenesis, Carcinogenesis, Yeast, WebKB, Terrorists and Hepatitis. The descriptions of datasets can be found in [Dumančić and Blockeel, 2017]. From the distributional community, we focus on the FB15k-237 and WN18RR datasets, which are accepted as standard. The description of the datasets can be found in [Dettmers et al., 2018]. |
| Dataset Splits | Yes | We perform standard nested cross-validation (respecting the provided splits) and report the relative performance of the methods in terms of differences in accuracy, acc_distributional − acc_symbolic, averaged over individual splits. (A sketch of this metric follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions various software components and packages (e.g., ProbLog, TILDE, NetworkX), but it does not specify their version numbers. |
| Experiment Setup | Yes | The dimensions of the embeddings were varied in {10, 20, 30, 50, 80, 100}; we include smaller dimensions because standard relational datasets tend to have a much smaller number of entities than the KBC datasets. All embeddings were trained for 100 epochs and saved in steps of 20 epochs. (A configuration sketch follows the table.) |
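
The relative-performance metric quoted in the Dataset Splits row is simple enough to sketch. The following Python snippet is a minimal illustration, not the authors' code, of the per-split accuracy difference acc_distributional − acc_symbolic averaged over cross-validation splits; the fold accuracies below are hypothetical placeholders, not values reported in the paper.

```python
def relative_performance(acc_distributional, acc_symbolic):
    """Average per-split accuracy difference between the two paradigms."""
    assert len(acc_distributional) == len(acc_symbolic)
    diffs = [d - s for d, s in zip(acc_distributional, acc_symbolic)]
    return sum(diffs) / len(diffs)

# Hypothetical per-fold accuracies from a 5-fold (nested) cross-validation:
acc_emb = [0.81, 0.79, 0.83, 0.80, 0.82]    # distributional (embedding) model
acc_rules = [0.78, 0.80, 0.79, 0.77, 0.81]  # symbolic (rule-based) model

# A positive value means the distributional approach is ahead on average.
print(relative_performance(acc_emb, acc_rules))
```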
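
The Experiment Setup row also lends itself to a short sketch. Assuming a plain grid over the stated settings, the snippet below enumerates the configurations implied by the quote: embedding dimensions in {10, 20, 30, 50, 80, 100}, training for 100 epochs, and checkpoints saved every 20 epochs. The training and evaluation code itself is not shown, since the paper releases no implementation.

```python
from itertools import product

EMBEDDING_DIMS = [10, 20, 30, 50, 80, 100]  # smaller dims suit the smaller relational datasets
TOTAL_EPOCHS = 100
CHECKPOINT_EPOCHS = range(20, TOTAL_EPOCHS + 1, 20)  # checkpoints at 20, 40, 60, 80, 100

# Enumerate every (dimension, checkpoint) pair that the sweep would evaluate.
for dim, epoch in product(EMBEDDING_DIMS, CHECKPOINT_EPOCHS):
    print(f"embedding_dim={dim}, checkpoint_epoch={epoch}")
```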