An Improved Algorithm for Learning to Perform Exception-Tolerant Abduction
Authors: Mengxue Zhang, Tushar Mathew, Brendan Juba
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also examine the empirical advantage of this new algorithm over the previous algorithm in two test domains, one of explaining conditions generated by a noisy k-DNF rule, and another of explaining conditions that are actually generated by a linear threshold rule. |
| Researcher Affiliation | Academia | Mengxue Zhang, Tushar Mathew, and Brendan Juba Washington University in St. Louis 1 Brookings Drive St. Louis, MO, 63130 USA mengxuezhang@wustl.edu, tusharmathew@gmail.com, bjuba@wustl.edu |
| Pseudocode | Yes | Algorithm 1 Partial Greedy Algorithm; Algorithm 2 Greedy partial RB; Algorithm 3 Low Deg Partial(X); Algorithm 4 Low Deg Partial 2 |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | No | The paper describes generating its own data: "In the first domain, there is a planted k-DNF rule that is used to define the condition, subject to some independent random noise." and "In the second domain, the condition is actually defined by a (random) linear threshold rule." |
| Dataset Splits | No | The paper mentions training sets ("generated 50,000 examples for each formula, a typical size training set") and test sets ("drew another data set... to estimate the error... on this new data set"; "generated 7,500 additional uniform random examples... to serve as a test set"), but does not specify a validation set or explicit training/validation/test splits with percentages or counts. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | Yes | For each example x, we independently chose whether to put c(x) = ϕ(x) with 95% probability, or to put c(x) = ¬ϕ(x) with 5% probability. That is, there is a noise rate of 5%.; We supplied Tolerant Elimination the actual noise parameter of 5% and we supplied the Low-Degree algorithm with the actual fraction, 14.25%, of the data that we expect the planted k-DNF to explain.; giving the low-degree and naive greedy (baseline) algorithms μ = 10%, 30%, 50%, 70%, 90% and 100%, and giving tolerant elimination a variety of different target error rates; only ϵ = 16% for 2-DNF had any nontrivial effect. |
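The noisy planted k-DNF domain described in the table above can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: the function names and the parameters n (number of variables), k, and num_terms are assumptions; only the 5% noise rate, the 50,000-example training-set size, and the 7,500-example test-set size come from the paper.

```python
import random

def planted_kdnf(n, k, num_terms, rng):
    """Sample a random k-DNF over n Boolean variables: a disjunction of
    num_terms conjunctions ("terms"), each over k distinct literals.
    (Term count and sampling scheme are assumptions, not from the paper.)"""
    formula = []
    for _ in range(num_terms):
        variables = rng.sample(range(n), k)
        term = [(v, rng.random() < 0.5) for v in variables]  # (index, required sign)
        formula.append(term)
    return formula

def evaluate(formula, x):
    """A k-DNF is satisfied iff some term has all of its literals satisfied."""
    return any(all(x[v] == sign for v, sign in term) for term in formula)

def generate_data(formula, n, m, noise=0.05, rng=None):
    """Draw m uniform random examples, set c(x) = phi(x), then independently
    flip the label with probability `noise` -- the 5% noise rate reported
    in the experiment setup."""
    rng = rng or random.Random(0)
    data = []
    for _ in range(m):
        x = [rng.random() < 0.5 for _ in range(n)]
        c = evaluate(formula, x)
        if rng.random() < noise:
            c = not c  # independent random label noise
        data.append((x, c))
    return data

rng = random.Random(0)
phi = planted_kdnf(n=20, k=2, num_terms=5, rng=rng)
train = generate_data(phi, n=20, m=50000, rng=rng)  # training-set size from the paper
test = generate_data(phi, n=20, m=7500, rng=rng)    # test-set size from the paper
```

With a fixed seed this makes the noise process reproducible, which matters here because the reported fraction of explainable data (14.25%) depends on both the planted formula and the realized noise.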