Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Learning Possibilistic Logic Theories from Default Rules
Authors: Ondřej Kuželka, Jesse Davis, Steven Schockaert
IJCAI 2016 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present experimental results that demonstrate the effectiveness of this approach. We evaluate the performance of this algorithm in two crowdsourcing experiments. In addition, we show how it can be used for approximating maximum a posteriori (MAP) inference in propositional Markov logic networks. An online appendix to this paper with additional details is available. |
| Researcher Affiliation | Academia | Ondřej Kuželka (Cardiff University, UK), Jesse Davis (KU Leuven, Belgium), Steven Schockaert (Cardiff University, UK) |
| Pseudocode | No | The paper describes the heuristic learning algorithm in Section 4.3 using prose, but it does not include a formal pseudocode block, algorithm box, or flow chart. |
| Open Source Code | Yes | The data, code, and learned models are available from https://github.com/supertweety/. |
| Open Datasets | Yes | We used CrowdFlower, an online crowdsourcing platform, to collect expert rules about two domains. We considered propositional MLNs learned from NLTCS, MSNBC, Plants and DNA data using the method from [Lowd and Davis, 2014]. These are standard datasets. |
| Dataset Splits | Yes | To create training and testing sets, we divided the data based on annotator ID so that all rules labeled by a given annotator appear only in the training set or only in the testing set, to prevent leakage of information. We used the existing train/tune/test division of the data. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper states: "Our learning algorithm is implemented in Java and uses the SAT4j library [Berre and Parrain, 2010]". While a library is named, a specific version number for SAT4j is not provided in the text. |
| Experiment Setup | Yes | We run it for a maximum time of 10 hours for the crowdsourcing experiments reported in Section 5.2 and for one hour for the experiments reported in Section 5.3. For C4.5 and RIPPER, we use the default settings. For random forests, we used the default settings and set the number of trees to 100. |
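The annotator-based split quoted under Dataset Splits can be illustrated with a short sketch. This is not the authors' code; the rule records and field names (`annotator_id`) are hypothetical, and the test fraction is an assumed parameter. The key property it demonstrates is that each annotator's rules land entirely in one side of the split, preventing leakage:

```python
import random
from collections import defaultdict

def split_by_annotator(rules, test_fraction=0.2, seed=0):
    """Split labeled rules so that every rule from a given annotator
    appears only in the training set or only in the test set."""
    # Group rules by the annotator who labeled them.
    by_annotator = defaultdict(list)
    for rule in rules:
        by_annotator[rule["annotator_id"]].append(rule)

    # Shuffle annotators (not individual rules) and reserve a fraction for testing.
    annotators = sorted(by_annotator)
    random.Random(seed).shuffle(annotators)
    n_test = max(1, int(len(annotators) * test_fraction))
    test_ids = set(annotators[:n_test])

    train = [r for a in annotators if a not in test_ids for r in by_annotator[a]]
    test = [r for a in test_ids for r in by_annotator[a]]
    return train, test
```

Splitting on the grouping variable (annotator) rather than on individual rules is what guarantees no annotator contributes labels to both sets.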