How to address monotonicity for model risk management?
Authors: Dangxing Chen, Weicheng Ye
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As a result of empirical examples, we demonstrate that monotonicity is often violated in practice and that monotonic groves of neural additive models are transparent, accountable, and fair. |
| Researcher Affiliation | Academia | 1Zu Chongzhi Center for Mathematics and Computational Sciences, Duke Kunshan University, Kunshan, Jiangsu, China. Correspondence to: Dangxing Chen <dangxing.chen@dukekunshan.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 Monotonic Groves of Neural Additive Model |
| Open Source Code | Yes | The code is built and modified based on (Tshitoyan, 2023). |
| Open Datasets | Yes | A popularly used dataset is the Kaggle credit score dataset. ... A report published by ProPublica in 2016 provided recidivism data for defendants in Broward County, Florida (Pro, 2016). ... This dataset (Ahmad et al., 2017; Chicco & Jurman, 2020) contains the medical records of 299 patients who had heart failure... |
| Dataset Splits | No | For all our experiments, the dataset is randomly partitioned into 75% training and 25% test sets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments. |
| Software Dependencies | No | The paper mentions 'The code is built and modified based on (Tshitoyan, 2023).' but does not specify software versions (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For all our experiments, the dataset is randomly partitioned into 75% training and 25% test sets. All neural networks contain 1 hidden layer with 2 units, logistic activation, and no regularization. |
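To make the "Pseudocode" row concrete, the following is a minimal sketch of a neural additive model with per-feature monotonicity, in the spirit of the paper's Algorithm 1 (Monotonic Groves of Neural Additive Model). Each feature gets its own subnetwork with 1 hidden layer of 2 logistic units, matching the reported architecture. Monotonicity is enforced here by constraining weights to be non-negative, which is a standard hard constraint and our assumption — the paper's own enforcement and certification mechanism may differ. All names (`ShapeFunction`, `MonotonicNAM`) are illustrative, not the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ShapeFunction:
    """One additive component f_i(x_i): a single hidden layer with 2
    logistic units. With `monotone=True`, all weights are sampled
    non-negative, which makes f_i non-decreasing (hard-constraint
    sketch; an assumption, not the paper's exact mechanism)."""

    def __init__(self, rng, monotone=True):
        low = 0.0 if monotone else -1.0
        self.w1 = rng.uniform(low, 1.0, size=2)   # input -> 2 hidden units
        self.b1 = rng.uniform(-1.0, 1.0, size=2)  # hidden biases
        self.w2 = rng.uniform(low, 1.0, size=2)   # hidden -> output

    def __call__(self, x):
        h = sigmoid(np.outer(x, self.w1) + self.b1)  # shape (n, 2)
        return h @ self.w2                           # shape (n,)

class MonotonicNAM:
    """Additive model: logit(x) = sum_i f_i(x_i), squashed by a sigmoid."""

    def __init__(self, monotone_mask, seed=0):
        rng = np.random.default_rng(seed)
        self.shape_fns = [ShapeFunction(rng, m) for m in monotone_mask]

    def predict_proba(self, X):
        logit = sum(f(X[:, i]) for i, f in enumerate(self.shape_fns))
        return sigmoid(logit)

# Sanity check: predictions are non-decreasing in the monotone feature.
model = MonotonicNAM([True, False], seed=1)
grid = np.column_stack([np.linspace(-4.0, 4.0, 100), np.zeros(100)])
p = model.predict_proba(grid)
```

Because each shape function depends on one feature only, per-feature monotonicity of the full model reduces to monotonicity of the corresponding subnetwork — which is what makes the additive structure attractive for the fairness and accountability claims quoted above.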
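The "Dataset Splits" and "Experiment Setup" rows together explain the report's reproducibility concern: a random 75/25 partition is described, but no seed is reported. A minimal sketch of that setup, with an explicitly assumed seed, could look like the following; the scikit-learn configuration in the comment is our assumption, since the paper does not name its framework.

```python
import numpy as np

def train_test_split_75_25(X, y, seed=0):
    """Random 75% / 25% train-test partition, as described in the paper's
    setup. The seed is our assumption -- the paper does not report one,
    which is why the exact split is not reproducible."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(round(0.75 * len(X)))
    train, test = idx[:cut], idx[cut:]
    return X[train], X[test], y[train], y[test]

# The reported network -- 1 hidden layer, 2 units, logistic activation,
# no regularization -- would correspond in scikit-learn (assumed) to:
#   MLPClassifier(hidden_layer_sizes=(2,), activation="logistic", alpha=0.0)

# Toy data standing in for one of the tabular datasets above.
X = np.arange(40, dtype=float).reshape(20, 2)
y = np.arange(20) % 2
X_tr, X_te, y_tr, y_te = train_test_split_75_25(X, y, seed=42)
```

Fixing and reporting the seed (and any validation split) is the missing piece that would move the "Dataset Splits" row from partial description to full reproducibility.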