Constrained Monotonic Neural Networks

Authors: Davor Runje, Sharath M Shankaranarayana

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments show this approach of building monotonic neural networks has better accuracy when compared to other state-of-the-art methods, while being the simplest one in the sense of having the least number of parameters, and not requiring any modifications to the learning procedure or post-learning steps.
Researcher Affiliation | Collaboration | Airt Research, Zagreb, Croatia; Algebra University College, Zagreb, Croatia.
Pseudocode | No | The paper describes the proposed method using mathematical definitions and equations (e.g., Definition 1, Definition 3), but it does not include a discrete pseudocode block or algorithm section.
Open Source Code | Yes | The code is publicly available at (Runje & Shankaranarayana, 2023a), while the preprocessed datasets for experiments are available at (Runje & Shankaranarayana, 2023b).
Open Datasets | Yes | For the first set of experiments, we use the datasets employed by the authors in (Liu et al., 2020) and use the exact train and test split for proper comparison. We perform experiments on 3 datasets: COMPAS (J. Angwin & Kirchner, 2016)... Blog Feedback Regression (Buza, 2014)... Loan Defaulter... For the second set of experiments, we use 2 datasets: Auto MPG (a regression dataset with 3 monotonic features) and Heart Disease... The preprocessed datasets for experiments are available at (Runje & Shankaranarayana, 2023b).
Dataset Splits | No | For the first set of experiments, we use the datasets employed by the authors in (Liu et al., 2020) and use the exact train and test split for proper comparison. Train-test splits of 80%/20% are used for all comparison experiments.
Hardware Specification | Yes | All experiments were performed using a Google Colaboratory instance with an NVIDIA Tesla T4 GPU (Bisong, 2019).
Software Dependencies | Yes | The code for experiments was written in the Keras framework (Chollet et al., 2015) and Keras Tuner (O'Malley et al., 2019) via integration with the TensorFlow framework, version 2.11 (Abadi et al., 2015).
Experiment Setup | No | We employ Bayesian optimization tuning with Gaussian processes (Snoek et al., 2012) to find the optimal hyperparameters such as the number of neurons, network depth or layers, initial learning rate, etc. (A minimal tuning sketch follows the table.)
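
To illustrate the hyperparameter search described in the Experiment Setup row, the following is a minimal sketch of Bayesian optimization with Keras Tuner on TensorFlow 2.11, assuming a generic fully connected regression model and placeholder data (n_features, x_train, y_train). It is not the authors' constrained monotonic architecture; it only shows the tuning mechanics (search over depth, width, and initial learning rate).

    # Minimal sketch; generic dense model, not the paper's monotonic layers.
    # Assumes: pip install keras-tuner tensorflow==2.11
    import keras_tuner as kt
    import tensorflow as tf

    def build_model(hp):
        model = tf.keras.Sequential()
        model.add(tf.keras.Input(shape=(n_features,)))       # n_features: placeholder
        for i in range(hp.Int("num_layers", 1, 3)):          # tune network depth
            model.add(tf.keras.layers.Dense(
                hp.Int(f"units_{i}", 8, 64, step=8),          # tune layer width
                activation="relu"))
        model.add(tf.keras.layers.Dense(1))
        model.compile(
            optimizer=tf.keras.optimizers.Adam(
                learning_rate=hp.Float("learning_rate", 1e-4, 1e-2, sampling="log")),
            loss="mse")
        return model

    # Bayesian optimization with a Gaussian-process surrogate over the search space.
    tuner = kt.BayesianOptimization(build_model, objective="val_loss", max_trials=20)
    tuner.search(x_train, y_train, validation_split=0.2, epochs=50)  # x_train, y_train: placeholders
    best_model = tuner.get_best_models(num_models=1)[0]

The search ranges and trial budget above are illustrative defaults, not values reported by the paper.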