Provably efficient, succinct, and precise explanations
Authors: Guy Blanc, Jane Lange, Li-Yang Tan
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We design an efficient algorithm with provable guarantees on the succinctness and precision of the explanations that it returns. Prior algorithms were either efficient but lacked such guarantees, or achieved such guarantees but were inefficient. Our algorithm also circumvents several hardness results for finding succinct and precise certificates, which we also discuss next. |
| Researcher Affiliation | Academia | Guy Blanc Stanford University gblanc@stanford.edu Jane Lange Massachusetts Institute of Technology jlange@mit.edu Li-Yang Tan Stanford University liyang@cs.stanford.edu |
| Pseudocode | Yes | Figure 1: How an algorithm for implicitly learning decision trees can be used to design a certificate-finding algorithm. FINDCERTIFICATE(f, T, x, ε, δ): Given: Query access to f : {±1}^d → {±1}, an algorithm for implicitly learning f with T as its decision tree hypothesis, instance x ∈ {±1}^d, precision parameter ε, and confidence parameter δ. Output: An ε-error certificate for x of size at most the depth of T, or ⊥ if no certificate is found. 1. Initialize α ← ∅. 2. Initialize C ← ∅. 3. While not ISLEAF(T, α): (a) Set i ← QUERY(T, α). (b) Add i to C. (c) Set α ← α ∪ {i = x_i}. 4. Using queries to f, check whether the following holds with confidence at least 1 − δ, indicating if C is an ε-error certificate for x: Pr_{y ∼ {±1}^d}[f(y) ≠ f(x) | y_C = x_C] ≤ ε. If so, output C. Otherwise, output ⊥. |
| Open Source Code | No | The paper states 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]', indicating no code is provided. The paper focuses on theoretical contributions and does not mention releasing any source code for its algorithms. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments with datasets. It mentions theoretical distributions like 'uniform random instance x ∈ {±1}^d' but not specific, publicly available datasets for training. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments, so it does not provide details about training, validation, or test dataset splits. |
| Hardware Specification | No | The paper states 'If you ran experiments... [N/A]' for all experiment-related questions, and does not provide any details about hardware specifications. |
| Software Dependencies | No | The paper is theoretical and does not describe experimental implementations, thus it does not specify any software dependencies with version numbers. |
| Experiment Setup | No | The paper states 'Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A]', confirming no experimental setup details are provided as the paper is theoretical. |
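The FINDCERTIFICATE pseudocode quoted in the table can be sketched in Python. This is an illustrative reading only, not the authors' implementation: the nested-dict tree layout (`'var'`, `'lo'`, `'hi'` keys), the `find_certificate` name, and the Chernoff-style sample count are all assumptions introduced here, and the paper's exact query bound for step 4 is not reproduced.

```python
import math
import random

def find_certificate(f, tree, x, eps, delta):
    """Illustrative sketch of FINDCERTIFICATE (hypothetical data layout).

    f    : black-box function from a tuple in {+1, -1}^d to +1/-1
    tree : decision tree T as nested dicts {'var': i, 'lo': subtree,
           'hi': subtree}; a leaf is a bare +1/-1 value
    x    : instance to explain, a tuple in {+1, -1}^d
    Returns a certificate C (list of coordinates) or None (the paper's ⊥).
    """
    # Steps 1-3: walk the root-to-leaf path of T that x follows,
    # collecting every coordinate queried along the way into C.
    C, node = [], tree
    while isinstance(node, dict):        # internal node, not yet a leaf
        i = node['var']
        C.append(i)
        node = node['hi'] if x[i] == 1 else node['lo']

    # Step 4: estimate Pr_y[f(y) != f(x) | y_C = x_C] by drawing uniform
    # y with the coordinates in C pinned to x's values. The sample size
    # is a generic Chernoff-style choice, not the paper's exact bound.
    n = math.ceil((8 / eps) * math.log(2 / delta))
    fixed = set(C)
    errors = 0
    for _ in range(n):
        y = tuple(x[j] if j in fixed else random.choice((1, -1))
                  for j in range(len(x)))
        errors += (f(y) != f(x))
    return C if errors / n <= eps else None
```

For example, with `f(y) = y[0]` and the depth-1 tree `{'var': 0, 'lo': -1, 'hi': 1}`, the walk from the root collects `C = [0]`, and since pinning coordinate 0 determines `f` exactly, the empirical error is 0 and the single-coordinate certificate is returned.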