Cardinality-Minimal Explanations for Monotonic Neural Networks

Authors: Ouns El Harzli, Bernardo Cuenca Grau, Ian Horrocks

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments suggest favourable performance of our algorithms." "We conducted experiments on two partially monotonic datasets commonly used as benchmarks for designing monotonic and partially-monotonic models [Liu et al., 2020]: Blog Feedback Regression [Buza, 2014], a regression dataset with 276 features and Loan Defaulter, a classification dataset with 28 features."
Researcher Affiliation | Academia | "Ouns El Harzli, Bernardo Cuenca Grau, Ian Horrocks; Department of Computer Science, University of Oxford; {ouns.elharzli, bernardo.cuenca.grau, ian.horrocks}@cs.ox.ac.uk"
Pseudocode | Yes | "Algorithm 1 Computing contrastive explanations." (The algorithm itself is not reproduced in this report; a hedged sketch of the one-pass check that explanation algorithms rely on for monotonic models appears after the table.)
Open Source Code | No | "To the best of our knowledge, our implementation is the only one available for computing cardinality-minimal explanations and hence we could not find a suitable benchmark for comparison."
Open Datasets | Yes | "We conducted experiments on two partially monotonic datasets commonly used as benchmarks for designing monotonic and partially-monotonic models [Liu et al., 2020]: Blog Feedback Regression [Buza, 2014], a regression dataset with 276 features and Loan Defaulter [1], a classification dataset with 28 features." [1] https://www.kaggle.com/datasets/wordsforthewise/lending-
Dataset Splits | No | "We trained monotonic FCN models... We were able to reach a root mean-squared error (RMSE) of 0.175 on the test set for the Blog Feedback regression... and reached an accuracy of 60% on Loan Defaulter..." (No mention of a validation split or of specific train/test percentages or counts.)
Hardware Specification | No | "All experiments were conducted using Google Colab with GPU." (This does not specify the GPU model or any CPU details.)
Software Dependencies | No | "We trained monotonic FCN models on both datasets with PyTorch [Paszke et al., 2019] using the mean-squared error loss for the Blog Feedback dataset and the binary cross entropy loss for the Loan Defaulter dataset. We trained the models with Adam [Kingma and Ba, 2014] for 10 epochs, setting all negative weights to 0 after each iteration of Adam to ensure monotonicity." (No version numbers are given for PyTorch or any other software dependency.)
Experiment Setup | Yes | "We trained monotonic FCN models on both datasets with PyTorch [Paszke et al., 2019] using the mean-squared error loss for the Blog Feedback dataset and the binary cross entropy loss for the Loan Defaulter dataset. We trained the models with Adam [Kingma and Ba, 2014] for 10 epochs, setting all negative weights to 0 after each iteration of Adam to ensure monotonicity." (A hedged PyTorch sketch of this projection step appears after the table.)
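
On the pseudocode row above: the paper's Algorithm 1 is cited only by its caption and is not reproduced in this report. The sketch below illustrates only the standard building block that explanation algorithms can use for monotonic models, namely that a single extra forward pass decides whether a candidate feature set can flip the prediction. The function name, bounds, and threshold are assumptions of this sketch, not the authors' pseudocode.

```python
import torch


@torch.no_grad()
def is_contrastive(model, x, candidate, lo, hi, threshold=0.0):
    """Decide whether altering the features in `candidate` can flip the
    prediction of a model that is monotone non-decreasing in every input.

    x:         1-D input tensor.
    candidate: indices of the features that may be altered.
    lo, hi:    per-feature lower/upper bounds of the input domain.

    By monotonicity, the score over all alterations of `candidate` is
    minimised (maximised) by setting those features to their lower
    (upper) bounds, so one extra forward pass per direction suffices.
    """
    x_ext = x.clone()
    if model(x.unsqueeze(0)).item() > threshold:   # currently positive
        x_ext[candidate] = lo[candidate]           # push the score down
        return model(x_ext.unsqueeze(0)).item() <= threshold
    x_ext[candidate] = hi[candidate]               # push the score up
    return model(x_ext.unsqueeze(0)).item() > threshold
```

With this one-evaluation test, small contrastive sets can be found by searching over candidate subsets; whether that search matches the authors' Algorithm 1 cannot be determined from the quotes collected here.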
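On the experiment setup row: the quoted training procedure (Adam with all negative weights zeroed after each iteration) is a projected-gradient scheme. The sketch below shows that projection step in PyTorch under stated assumptions: the layer sizes, learning rate, full-batch loop, and synthetic data are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

# Illustrative FCN; the paper does not report its layer sizes.
model = nn.Sequential(
    nn.Linear(28, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # stable variant of binary cross entropy

# Synthetic stand-in data; the real experiments use the Kaggle datasets.
X = torch.rand(256, 28)
y = (X.sum(dim=1) > 14).float().unsqueeze(1)

for epoch in range(10):  # the paper trains for 10 epochs
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    # The quoted projection: zero out negative weights after each Adam
    # iteration. Biases are left untouched; they do not affect monotonicity.
    with torch.no_grad():
        for layer in model:
            if isinstance(layer, nn.Linear):
                layer.weight.clamp_(min=0.0)
```

Since ReLU is non-decreasing, keeping every linear layer's weight matrix non-negative makes the whole network monotone non-decreasing in each input, which is why this clamp preserves monotonicity throughout training.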