Learning Performance Maximizing Ensembles with Explainability Guarantees
Authors: Vincent Pisztora, Jia Li
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we describe the data, model training procedures, performance evaluation metrics, and results of our experiments. |
| Researcher Affiliation | Academia | Vincent Pisztora, Jia Li Department of Statistics, Pennsylvania State University, USA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | Following the tabular data benchmarking framework proposed by (Grinsztajn, Oyallon, and Varoquaux 2022), we conduct experiments on a set of 31 datasets (13 classification, 18 regression). |
| Dataset Splits | Yes | Each dataset is split (70%, 9%, 21%) into training, validation, and test sets respectively, following (Grinsztajn, Oyallon, and Varoquaux 2022). |
| Hardware Specification | No | Computations for this research were performed on the Pennsylvania State University's Institute for Computational and Data Sciences Roar supercomputer. This does not provide specific hardware details like GPU/CPU models. |
| Software Dependencies | No | The paper names the model types used (e.g., 'logistic regression', 'neural network') but does not list specific software dependencies or version numbers. |
| Experiment Setup | Yes | Hyperparameter tuning for all models is done using 4-fold cross-validation, with the exception of the neural network tuning which is done using the validation set. A grid search is done to select the best hyperparameters for each model with search values available in the Appendix of the long form paper available on arxiv.org. |
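The paper does not release code or name its tooling, but the reported protocol (a 70%/9%/21% train/validation/test split and grid search over hyperparameters with 4-fold cross-validation) can be sketched directly. The helper names below (`split_dataset`, `grid_search_cv`, `fit_score`) are hypothetical, not from the paper; this is a minimal illustration of the stated setup, not the authors' implementation.

```python
import numpy as np

def split_dataset(n, seed=0):
    # Shuffle indices and split 70% / 9% / 21% into train,
    # validation, and test sets (the proportions the paper reports).
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(0.70 * n)
    n_val = int(0.09 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def grid_search_cv(X, y, param_grid, fit_score, k=4):
    # Grid search: for each candidate hyperparameter value, average
    # the score over k cross-validation folds (k=4 in the paper) and
    # keep the value with the best mean score.
    folds = np.array_split(np.arange(len(X)), k)
    best_param, best_score = None, -np.inf
    for p in param_grid:
        scores = []
        for i in range(k):
            val = folds[i]
            tr = np.concatenate([folds[j] for j in range(k) if j != i])
            # fit_score is a user-supplied callable: train on (X[tr], y[tr])
            # with hyperparameter p, return a validation score on X[val].
            scores.append(fit_score(X[tr], y[tr], X[val], y[val], p))
        mean_score = float(np.mean(scores))
        if mean_score > best_score:
            best_param, best_score = p, mean_score
    return best_param, best_score
```

Note that the paper tunes the neural network on the held-out validation set rather than by cross-validation, so for that model one would call `fit_score` once on the 9% validation split instead of looping over folds.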