Tractable Explanations for d-DNNF Classifiers

Authors: Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Martin Cooper, Nicholas Asher, Joao Marques-Silva

AAAI 2022, pp. 5719-5728

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This paper shows that for classifiers represented with some of the best-known propositional languages, different kinds of explanations can be computed in polynomial time. Furthermore, the paper describes optimizations, specific to Sentential Decision Diagrams (SDDs), which are shown to yield more efficient algorithms in practice. ... Section 5 assesses the computation of explanations of d-DNNFs and SDDs in practical settings. (A sketch of the kind of polynomial-time circuit query behind this claim follows the table.)
Researcher Affiliation | Academia | 1. University of Toulouse, France; 2. Monash University, Melbourne, Australia; 3. IRIT, CNRS, Univ. Paul Sabatier, Toulouse, France; 4. IRIT, CNRS, Toulouse, France
Pseudocode | Yes | Algorithm 1: Finding one AXp given starting seed S; Algorithm 2: Finding one CXp given starting seed S; Algorithm 3: Enumeration algorithm. (A hedged AXp-extraction sketch follows the table.)
Open Source Code | Yes | To compile SDDs, we use the PySDD package, which is implemented in Python and Cython. PySDD wraps the well-known SDD package, which offers canonical SDDs. ... All the materials for replicating the experiments are available at https://github.com/XuanxiangHuang/Xddnnf-experiments
Open Datasets | Yes | The experiments consider a selection of 19 binary classification datasets that are publicly available and originate from the Penn Machine Learning Benchmarks (Olson et al. 2017) and the UCI Machine Learning Repository (Dua and Graff 2017).
Dataset Splits | No | The paper does not explicitly provide details about a validation dataset split or the methodology for creating one. It mentions using a ...
Hardware Specification | Yes | Lastly, we run the experiments on a MacBook Pro with a 6-core Intel Core i7 2.6 GHz processor and 16 GB of RAM, running macOS Big Sur.
Software Dependencies | No | The paper mentions using "Orange3", the "PySDD package", and the "PySAT toolkit" for its experiments, but it does not specify any version numbers for these software dependencies, nor for Python itself.
Experiment Setup | No | The paper describes the overall process (training RODT models and compiling them into d-DNNFs/SDDs) and mentions the datasets and tools used, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed system-level training settings for reproducing the experiments.
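The polynomial-time claim quoted in the Research Type row rests on the fact that d-DNNF circuits support certain queries in time proportional to the circuit size. As a purely illustrative aid (not the paper's code or data structures), the following minimal sketch shows the textbook consistency check on a DNNF under a partial assignment; the tuple-based circuit encoding and the helper name `is_consistent` are assumptions made here for readability, and validity queries, which additionally exploit determinism, are not shown.

```python
from typing import Dict, Tuple, Union

# Tiny NNF encoding (illustrative only): literals are non-zero ints
# (negative = negated variable); internal nodes are ("and", children) or
# ("or", children).  The check below is correct only when AND nodes are
# decomposable (children share no variables), i.e. the DNNF property.
Node = Union[int, Tuple[str, tuple]]

def is_consistent(node: Node, assignment: Dict[int, bool]) -> bool:
    """Satisfiability check of a DNNF conditioned on a partial assignment.

    Runs in time linear in the circuit size for tree-shaped circuits; a
    real implementation would memoize over the shared DAG nodes.
    """
    if isinstance(node, int):                       # literal
        var, positive = abs(node), node > 0
        if var in assignment:                       # conditioned literal
            return assignment[var] == positive
        return True                                 # free literal is satisfiable
    kind, children = node
    if kind == "and":                               # decomposable conjunction
        return all(is_consistent(c, assignment) for c in children)
    if kind == "or":                                # disjunction
        return any(is_consistent(c, assignment) for c in children)
    raise ValueError(f"unknown node kind: {kind}")

# Toy circuit: (x1 AND x2) OR (NOT x1 AND x3)
circuit: Node = ("or", (("and", (1, 2)), ("and", (-1, 3))))
print(is_consistent(circuit, {1: True, 2: False}))  # False: both branches blocked
print(is_consistent(circuit, {1: False}))           # True: (NOT x1 AND x3) still open
```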
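The algorithms named in the Pseudocode row follow the familiar deletion-based pattern for computing one AXp (abductive explanation): start from a seed set of features that entails the prediction and drop every feature that is not needed. The sketch below is a minimal rendering of that pattern, not the paper's Algorithm 1; the `entails_prediction` oracle is a placeholder assumption, which the paper answers in polynomial time on the compiled d-DNNF/SDD using circuit queries such as the one sketched above.

```python
from typing import Callable, Set

def find_one_axp(seed: Set[int],
                 entails_prediction: Callable[[Set[int]], bool]) -> Set[int]:
    """Deletion-based extraction of one AXp (abductive explanation).

    `seed` is a set of feature indices assumed to entail the prediction
    (e.g. all features of the instance).  `entails_prediction` is a
    placeholder oracle: it must return True iff fixing exactly the given
    features to their values in the instance forces the prediction.
    """
    assert entails_prediction(seed), "the seed must already be sufficient"
    axp = set(seed)
    for feature in sorted(seed):
        # Tentatively free this feature; keep it out only if the
        # remaining fixed features still force the prediction.
        candidate = axp - {feature}
        if entails_prediction(candidate):
            axp = candidate
    return axp  # subset-minimal by construction
```

Because each oracle call is polynomial on a d-DNNF/SDD and the loop makes one call per feature, the whole extraction runs in polynomial time, which matches the complexity result the paper reports; a CXp is typically extracted by the dual procedure, and enumeration would interleave such extractions with a solver-guided search over candidate seeds (consistent with the PySAT dependency noted under Software Dependencies, though the details here are an assumption).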