On Robust Trimming of Bayesian Network Classifiers
Authors: YooJung Choi, Guy Van den Broeck
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments investigate both the runtime cost of trimming and its effect on the robustness and accuracy of the final classifier. Finally, with evaluation on real-world data, we show that our approach finds robust trimmings and demonstrate the relationship between robustness and accuracy. |
| Researcher Affiliation | Academia | YooJung Choi and Guy Van den Broeck, Computer Science Department, University of California, Los Angeles {yjchoi, guyvdb}@cs.ucla.edu |
| Pseudocode | Yes | Algorithm 1 ECA-TRIM(I, E, b) and Algorithm 2 COMPUTE-MAA |
| Open Source Code | Yes | Code available at https://github.com/UCLA-StarAI/TrimBN. |
| Open Datasets | Yes | We evaluate our method on real-world datasets from the UCI repository [Bache and Lichman, 2013], BFC, and CRESST. |
| Dataset Splits | Yes | We randomly split each dataset into 80/20 train and test sets and learn a naive Bayes classifier using the training set. We compute the average classification accuracy of each feature subset using 10-fold cross validation on the training set. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments were mentioned. |
| Software Dependencies | No | The paper mentions software components such as SDDs and the E-SDP algorithm, but it does not provide version numbers for any software dependency. |
| Experiment Setup | Yes | With the budget set as half the number of features and threshold as 0.5, we compute the ECA of each feasible feature subset. Each naive Bayes classifier was trimmed with the budget set to 1/3 the number of features, each feature given unit cost, and classification thresholds in {0.1, 0.2, . . . , 0.9}. |
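The evaluation protocol quoted above (a random 80/20 train/test split followed by 10-fold cross validation on the training portion) can be sketched in plain Python. This is a minimal, hedged illustration of the described splitting scheme, not the authors' code; the dataset, seed, and function names are placeholders.

```python
# Illustrative sketch of the splitting protocol described in the review:
# an 80/20 random train/test split, then a 10-fold partition of the
# training set for cross validation. Stdlib only; all names are
# hypothetical placeholders, not taken from the paper's released code.
import random


def train_test_split(items, test_frac=0.2, seed=0):
    """Randomly shuffle items and split off a test fraction."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]


def k_folds(items, k=10):
    """Partition items into k roughly equal, disjoint folds."""
    return [items[i::k] for i in range(k)]


data = list(range(100))                 # placeholder dataset of 100 examples
train, test = train_test_split(data)    # 80/20 split as in the paper
folds = k_folds(train, k=10)            # 10-fold CV on the training set
```

Each cross-validation round would train a naive Bayes classifier on nine folds and score the held-out fold, averaging accuracy over the ten rounds as the quoted setup describes.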