Learning Relational Sum-Product Networks
Authors: Aniruddh Nath, Pedro Domingos
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the algorithm on three datasets; the RSPN learning algorithm outperforms Markov Logic Networks in both running time and predictive accuracy. |
| Researcher Affiliation | Academia | Aniruddh Nath and Pedro Domingos Department of Computer Science and Engineering University of Washington Seattle, WA 98195-2350, U.S.A. {nath, pedrod}@cs.washington.edu |
| Pseudocode | Yes | Algorithm 1 LearnRSPN(C, T, V) |
| Open Source Code | No | The paper neither states that the source code for the described method (LearnRSPN) is released nor links to a code repository for the implementation. |
| Open Datasets | Yes | The UW-CSE database (Richardson and Domingos 2006); "We generated artificial social networks in the Friends-and-Smokers domain (Singla and Domingos 2008)"; "The test corpus consists of four short Python programming assignments from the MIT edX introductory programming course (6.00x) (Singh, Gulwani, and Solar-Lezama 2013)" |
| Dataset Splits | Yes | "We performed leave-one-out testing by area, testing on each area in turn using the model trained from the remaining four." "The systems were trained on three programs and tested on the fourth." |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | Yes | For MLN inference, we used the MC-SAT algorithm, the default choice in ALCHEMY 2.0, with the default parameters. |
| Experiment Setup | Yes | To cluster instances in LearnRSPN, we used the EM implementation in SCIKIT-LEARN (Pedregosa et al. 2011), with two clusters. ... To discourage excessively fine-grained decomposition during structure learning, we used a high threshold of 0.5 for the one-tailed p-value. For EDTs, we used the independent Bernoulli form, as described in example 1 in the main paper. All Bernoulli distributions were smoothed with a pseudocount of 0.1. For LSM, we used the example parameters in the implementation (Nwalks = 10,000, π = 0.1; remaining parameters as specified by Kok and Domingos 2010). |
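The leave-one-out-by-area protocol quoted in the Dataset Splits row can be sketched as below. This is an illustrative snippet, not the authors' code; the five area names follow the standard UW-CSE dataset of Richardson and Domingos (2006), and `leave_one_out_splits` is a hypothetical helper.

```python
# UW-CSE research areas, as in Richardson and Domingos (2006).
AREAS = ["ai", "graphics", "language", "systems", "theory"]

def leave_one_out_splits(areas):
    """Yield (train_areas, test_area) pairs: test on each area in turn,
    training on the model built from the remaining four."""
    for held_out in areas:
        train = [a for a in areas if a != held_out]
        yield train, held_out

splits = list(leave_one_out_splits(AREAS))
# Five splits, each training on four areas and testing on the fifth.
```

Each split would then drive one train/evaluate cycle, with results averaged over the five held-out areas.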
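The pseudocount smoothing mentioned in the Experiment Setup row (smoothing Bernoulli distributions with a pseudocount of 0.1) amounts to additive smoothing of the maximum-likelihood estimate. A minimal sketch, assuming the standard formulation where the pseudocount is added to both outcomes (the function name is hypothetical):

```python
def smoothed_bernoulli(successes: int, trials: int, pseudocount: float = 0.1) -> float:
    """Estimate a Bernoulli parameter with additive (pseudocount) smoothing.

    Adding the pseudocount to both outcomes keeps the estimate strictly
    between 0 and 1, so sparse data never yields a degenerate probability.
    """
    return (successes + pseudocount) / (trials + 2 * pseudocount)

# An attribute never observed true in 5 instances still gets a small
# nonzero probability: (0 + 0.1) / (5 + 0.2).
p = smoothed_bernoulli(0, 5)
```

This keeps leaf distributions in the learned model from assigning zero probability to unseen attribute values at test time.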