Tractable Learning for Complex Probability Queries

Authors: Jessa Bekker, Jesse Davis, Arthur Choi, Adnan Darwiche, Guy Van den Broeck

NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To resolve these questions, we run LearnSDD and ACMN on real-world data and compare their performance." (from Section 5, Empirical Evaluation)
Researcher Affiliation | Academia | Jessa Bekker, Jesse Davis (KU Leuven, Belgium; {jessa.bekker,jesse.davis}@cs.kuleuven.be); Arthur Choi, Adnan Darwiche, Guy Van den Broeck (University of California, Los Angeles; {aychoi,darwiche,guyvdb}@cs.ucla.edu)
Pseudocode | Yes | Algorithm 1: LearnSDD(T, e, α)
Open Source Code | No | The paper states "Our LearnSDD implementation builds on the publicly available SDD package" (footnote 3: http://reasoning.cs.ucla.edu/sdd/), but it does not provide a link to, or a statement about, open-sourced code for the LearnSDD method itself.
Open Datasets | Yes | "We used the Traffic and Temperature data sets [5] to evaluate the benefit of detecting mutual exclusivity. ... we used voting data from GovTrack.us and Pang and Lee's Movie Review data set." (footnote 4: http://www.cs.cornell.edu/people/pabo/movie-review-data/)
Dataset Splits | Yes | "For all data sets, we divided the data into a single train, tune, and test partition."

Table 2: Data Set Characteristics
Data Set     Train Set Size  Tune Set Size  Test Set Size  Num. Vars.
Traffic       3,311            441            662            128
Temperature  13,541          1,805          2,708            216
Voting        1,214            200            350          1,359
Movies        1,600            150            250          1,000
Hardware Specification | Yes | "All experiments were run on identically configured machines with 128GB RAM and twelve 2.4GHz cores."
Software Dependencies | No | The paper mentions using "the scikit-learn CountVectorizer" and "the Porter stemmer" but does not provide version numbers for these software components.
Experiment Setup | Yes | "For LearnSDD, we tried setting α to 1.0, 0.1, 0.01 and 0.001. For ACMN, we did a grid search for the hyper-parameters (per-split penalty ps and the L1- and L2-norm weights l1 and l2) with ps ∈ {2, 5, 10}, l1 ∈ {0.1, 1, 5} and l2 ∈ {0.1, 0.5, 1}."
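The Pseudocode row above names Algorithm 1, LearnSDD(T, e, α). A minimal sketch of the greedy loop it suggests is below, assuming a scoring rule that trades log-likelihood gain against circuit-size growth via the per-size penalty α; candidate generation and all SDD operations from the actual paper are abstracted away, and the candidate tuples are hypothetical.

```python
# Hedged sketch of the greedy loop suggested by Algorithm 1, LearnSDD(T, e, alpha).
# The real algorithm conjoins candidate features into an SDD circuit; here each
# candidate is a hypothetical (name, log_likelihood_gain, size_increase) tuple,
# and the score trades likelihood gain against size growth via alpha.

def learn_sdd_sketch(candidates, alpha, max_features=10):
    """Greedily add features whose score = ll_gain - alpha * size_increase > 0."""
    model = []
    remaining = list(candidates)
    for _ in range(max_features):
        scored = [(gain - alpha * size, name)
                  for name, gain, size in remaining]
        best_score, best_name = max(scored, default=(float("-inf"), None))
        if best_name is None or best_score <= 0:
            break  # no remaining candidate improves the penalized score
        model.append(best_name)
        remaining = [c for c in remaining if c[0] != best_name]
    return model

# Usage with made-up candidates (name, ll_gain, size_increase):
candidates = [("f1", 5.0, 10), ("f2", 2.0, 50), ("f3", 0.5, 100)]
```

A larger α, as in the paper's sweep from 1.0 down to 0.001, makes the learner more reluctant to accept features that enlarge the circuit.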
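The split sizes reported in Table 2 can be sanity-checked in a few lines; the totals and train fractions below are derived from the reported sizes, not stated in the paper.

```python
# Sanity-check of the split sizes reported in Table 2
# (totals and fractions are derived, not stated in the paper).
splits = {
    "Traffic":     (3311, 441, 662),
    "Temperature": (13541, 1805, 2708),
    "Voting":      (1214, 200, 350),
    "Movies":      (1600, 150, 250),
}

for name, (train, tune, test) in splits.items():
    total = train + tune + test
    print(f"{name}: {total} examples, {train / total:.0%} train")
    # e.g. Traffic: 4414 examples, 75% train
```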
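The grid search described in the Experiment Setup row can be sketched with itertools.product; `train_and_score` is a hypothetical stand-in for training ACMN with one hyper-parameter setting and returning its tune-set score.

```python
import itertools

# Hedged sketch of the ACMN hyper-parameter grid search described above.
# `train_and_score` is a hypothetical callback standing in for training one
# model and returning its tune-set score (higher is better).

def grid_search(train_and_score):
    ps_values = [2, 5, 10]     # per-split penalty
    l1_values = [0.1, 1, 5]    # L1-norm weight
    l2_values = [0.1, 0.5, 1]  # L2-norm weight
    best = None
    for ps, l1, l2 in itertools.product(ps_values, l1_values, l2_values):
        score = train_and_score(ps, l1, l2)
        if best is None or score > best[0]:
            best = (score, {"ps": ps, "l1": l1, "l2": l2})
    return best

# For LearnSDD, only the per-size penalty alpha is varied:
alphas = [1.0, 0.1, 0.01, 0.001]
```

The full ACMN grid is 3 × 3 × 3 = 27 settings per data set, versus 4 values of α for LearnSDD.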