Probabilistic Sufficient Explanations
Authors: Eric Wang, Pasha Khosravi, Guy Van den Broeck
IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments demonstrate the effectiveness of our algorithm in finding sufficient explanations, and showcase its advantages compared to Anchors and logical explanations. |
| Researcher Affiliation | Academia | University of California, Los Angeles ericzxwang@ucla.edu, {pashak,guyvdb}@cs.ucla.edu |
| Pseudocode | No | The paper describes its 'beam search algorithm' in paragraph form under 'Finding Probabilistic Sufficient Explanations', but it does not provide a formal pseudocode block or algorithm figure. (A hedged sketch of such a beam search appears after this table.) |
| Open Source Code | Yes | Code at github.com/UCLA-StarAI/SufficientExplanations |
| Open Datasets | Yes | We use the adult and MNIST datasets [Kohavi, 1996; Yann et al., 2009] for our experiments. |
| Dataset Splits | No | The paper mentions 'test images' and 'test examples' but does not provide specific details on training, validation, or test splits (e.g., percentages or counts), nor does it explicitly mention a validation set. |
| Hardware Specification | No | The paper states 'Our algorithm with the same beam size and cardinality constraint k = 30 took 347s using 16 threads', but it does not provide specific hardware details such as CPU/GPU models or memory. It defers further details to the appendix: 'For more detailed information on the datasets, preprocessing steps, learned models, and computing infrastructure, please refer to the appendix.' |
| Software Dependencies | No | The paper mentions using the 'open source Juice library [Dang et al., 2021]' and 'XGBoost [Chen and Guestrin, 2016]', but it does not specify version numbers for these software components in the main text. |
| Experiment Setup | No | The paper gives algorithm-specific parameters such as the 'beam size' and the cardinality constraint k = 30, as well as parameters for the comparison method Anchors (e.g., an 'SDP (precision) threshold of 0.95'; the SDP is written out after this table), but it does not provide general training details such as learning rates, optimizers, or epochs. It defers 'preprocessing steps, learned models, and computing infrastructure' to the appendix. |
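For context on the 0.95 threshold quoted above: both the paper's probabilistic sufficient explanations and the Anchors precision criterion are built around the same-decision probability (SDP). A minimal statement of that quantity, with notation assumed here (classifier $f$, instance $\mathbf{x}$, feature subset $S$, feature distribution $P$):

```latex
\mathrm{SDP}_f(\mathbf{x}_S)
  = \Pr_{\mathbf{x}' \sim P(\mathbf{X} \mid \mathbf{X}_S = \mathbf{x}_S)}
      \bigl[ f(\mathbf{x}') = f(\mathbf{x}) \bigr]
```

A subset $S$ whose SDP clears the chosen threshold (0.95 in the Anchors comparison above) is treated as sufficient to support the original prediction.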
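Since the Pseudocode row notes that the beam search is described only in prose, the following is a minimal, hypothetical sketch of such a search over feature subsets. The `sdp` callable, the default beam size, and the greedy one-feature-at-a-time growth scheme are assumptions for illustration, not the authors' implementation, which computes these quantities with probabilistic circuits via the Juice library.

```python
def beam_search_explanation(features, sdp, k=30, beam_size=5):
    """Hypothetical sketch: greedily grow feature subsets, keeping the
    `beam_size` highest-SDP candidates at each step, up to cardinality k.

    features : iterable of feature indices of the instance being explained.
    sdp      : callable mapping a frozenset of feature indices to the
               same-decision probability of fixing those features (an
               assumption; the paper computes this with probabilistic
               circuits).
    """
    beam = [frozenset()]                     # start from the empty subset
    best = max(beam, key=sdp)
    for _ in range(k):                       # grow subsets one feature at a time
        candidates = {s | {f} for s in beam for f in features if f not in s}
        if not candidates:
            break
        # keep only the beam_size subsets with the highest SDP
        beam = sorted(candidates, key=sdp, reverse=True)[:beam_size]
        best = max(best, beam[0], key=sdp)
    return best


# Toy usage with a made-up SDP: larger subsets of {0, 1, 2} score higher.
toy_sdp = lambda s: len(s) / 3
print(beam_search_explanation(range(3), toy_sdp, k=2, beam_size=2))
```

The beam is what keeps the search tractable: instead of scoring all $2^n$ subsets, at most `beam_size` candidates survive each of the (at most) k growth steps.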