Learning Relational Representations with Auto-encoding Logic Programs

Authors: Sebastijan Dumancic, Tias Guns, Wannes Meert, Hendrik Blockeel

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally show that these latent representations are indeed beneficial in relational learning tasks. The experiments aim to answer the following question: does learning from latent representations created by Alps improve the performance of an SRL model? The results (Figure 3) indicate that BUSL is able to learn better models from the latent representations.
Researcher Affiliation | Academia | Sebastijan Dumančić¹, Tias Guns², Wannes Meert¹ and Hendrik Blockeel¹; ¹KU Leuven, Belgium; ²VUB, Belgium; {sebastijan.dumancic, wannes.meert, hendrik.blockeel}@cs.kuleuven.be, tias.guns@vub.be
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. (A hedged sketch of the auto-encoding idea follows the table.)
Open Source Code | No | The paper does not provide concrete access to its own source code for the methodology described. It mentions using a third-party solver (Oscar) with a link, but not its own implementation. (A stand-in COP sketch follows the table.)
Open Datasets | Yes | We use standard SRL benchmark datasets often used with MLN learners: Cora-ER, WebKB, UWCSE and IMDB. The descriptions of the datasets are available in [Mihalkova and Mooney, 2007; Kok and Domingos, 2010], while the datasets are available on the Alchemy website (http://alchemy.cs.washington.edu/).
Dataset Splits | Yes | We divide the data into training, validation and test sets, respecting the originally provided splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper states 'The COP is solved using the Oscar solver' but does not give a version number for it or any other software dependency.
Experiment Setup | Yes | Alps hyper-parameters: when learning latent representations, we vary the length of the encoder and decoder clauses separately in {2, 3} and the compression level (the α parameter) in {0.3, 0.5, 0.7}. (The grid is enumerated in a sketch below.)
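
Since the paper contains no pseudocode, the following is a minimal, hedged sketch (Python, not the authors' implementation) of the auto-encoding idea: a fixed encoder clause derives latent facts from observed ground atoms, a fixed decoder clause maps them back, and a program is judged by its reconstruction error under a compression constraint. The predicates (`smokes`, `cancer`, `atrisk`) and all helper names are invented for illustration; ALPS searches over such clause programs rather than fixing them.

```python
# Toy sketch of the auto-encoding objective behind ALPS (illustration
# only, not the paper's system). Facts are ground atoms stored as
# (predicate, argument) tuples.

observed = {
    ("smokes", "anna"), ("cancer", "anna"),
    ("smokes", "bob"),  ("cancer", "bob"),
    ("smokes", "carl"),  # carl smokes but has no cancer fact
}

def encode(facts):
    # Hypothetical encoder clause: atrisk(X) :- smokes(X), cancer(X).
    smokers  = {a for (p, a) in facts if p == "smokes"}
    patients = {a for (p, a) in facts if p == "cancer"}
    return {("atrisk", x) for x in smokers & patients}

def decode(latent):
    # Hypothetical decoder clauses:
    #   smokes(X) :- atrisk(X).   cancer(X) :- atrisk(X).
    out = set()
    for (_, x) in latent:
        out.update({("smokes", x), ("cancer", x)})
    return out

latent = encode(observed)
reconstructed = decode(latent)

error = len(observed ^ reconstructed)      # atoms lost or hallucinated
compression = len(latent) / len(observed)  # plays the role of alpha

print("latent facts:", sorted(latent))          # [('atrisk', 'anna'), ('atrisk', 'bob')]
print("reconstruction error:", error)           # 1: smokes(carl) is lost
print("compression level: %.2f" % compression)  # 0.40
```

On this toy data the decoder recovers four of the five observed atoms, losing smokes(carl); the paper's optimisation trades exactly this kind of reconstruction error against the compression level α.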
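The paper formulates program selection as a constraint optimisation problem (COP) solved with the Oscar solver, whose integration is not released. As a stand-in, the sketch below poses a toy selection COP with Google OR-Tools CP-SAT, a different solver named here plainly as a swap. The candidate clauses, their fact coverage, and the budget are invented; the sketch only illustrates the shape of a coverage-versus-compression COP.

```python
# Stand-in COP sketch with OR-Tools CP-SAT (the paper uses Oscar).
# Invented data: four candidate clauses, each covering a subset of
# five observed facts, plus a selection budget standing in for the
# compression constraint.

from ortools.sat.python import cp_model

coverage = {
    "c0": {0, 1, 2},
    "c1": {2, 3},
    "c2": {3, 4},
    "c3": {0, 4},
}
n_facts = 5
budget = 2  # maximum number of clauses to select

model = cp_model.CpModel()
pick = {c: model.NewBoolVar(c) for c in coverage}

# Compression-style budget on the number of selected clauses.
model.Add(sum(pick.values()) <= budget)

# covered[f] may only be true if some selected clause covers fact f.
covered = []
for f in range(n_facts):
    v = model.NewBoolVar(f"covered_{f}")
    supporting = [pick[c] for c, facts in coverage.items() if f in facts]
    model.AddBoolOr(supporting).OnlyEnforceIf(v)
    covered.append(v)

# Objective: cover as many observed facts as possible.
model.Maximize(sum(covered))

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("selected:", [c for c in coverage if solver.Value(pick[c])])
    print("facts covered:", sum(solver.Value(v) for v in covered))
```

With a budget of 2, an optimal solution picks c0 and c2 and covers all five facts.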
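The reported hyper-parameter grid is small enough to enumerate directly. A sketch, assuming a hypothetical `run_alps` entry point (the paper does not publish one):

```python
# Enumerating the reported Alps grid; `run_alps` is hypothetical.
from itertools import product

encoder_lengths = (2, 3)
decoder_lengths = (2, 3)
alphas = (0.3, 0.5, 0.7)  # compression level

for enc_len, dec_len, alpha in product(encoder_lengths, decoder_lengths, alphas):
    print(f"run_alps(encoder_len={enc_len}, decoder_len={dec_len}, alpha={alpha})")
```

This yields 2 × 2 × 3 = 12 configurations per dataset.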