Certified Monotonic Neural Networks

Authors: Xingchao Liu, Xing Han, Na Zhang, Qiang Liu

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical studies on various datasets demonstrate the efficiency of our approach over the state-of-the-art methods, such as Deep Lattice Networks [34].
Researcher Affiliation | Academia | Xingchao Liu, Department of Computer Science, University of Texas at Austin, Austin, TX 78712 (xcliu@utexas.edu); Xing Han, Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712 (aaronhan223@utexas.edu); Na Zhang, Tsinghua University (zhangna@pbcsf.tsinghua.edu.cn); Qiang Liu, Department of Computer Science, University of Texas at Austin, Austin, TX 78712 (lqiang@cs.utexas.edu)
Pseudocode | Yes | See Algorithm 1 in the Appendix for the detailed procedure.
Open Source Code | Yes | The code is publicly available: https://github.com/gnobitab/CertifiedMonotonicNetwork
Open Datasets | Yes | Experiments are performed on 4 datasets: COMPAS [16], Blog Feedback Regression [4], Loan Defaulter (https://www.kaggle.com/wendykan/lending-club-loan-data), and Chest X-ray (https://www.kaggle.com/nih-chest-xrays/sample).
Dataset Splits | Yes | For each dataset, we pick 20% of the training data as the validation set. (A split sketch appears after the table.)
Hardware Specification | Yes | Our computer has 48 cores and 192GB memory.
Software Dependencies | Yes | For solving the MILP problems, we adopt Gurobi v9.0.1 [14], which is an efficient commercial solver. Our method is implemented with PyTorch [24]. (A minimal MILP sketch appears after the table.)
Experiment Setup | Yes | We use cross-entropy loss for classification problems, and mean-squared error for regression problems. The Adam [18] optimizer is used for optimization. We initialize the coefficient of the monotonicity regularization λ = 1, and multiply λ by 10 every time λ needs amplification. The default learning rate is 5e-3. (A training-loop sketch appears after the table.)
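The dataset-splits row reports a simple 80/20 train/validation split. A minimal sketch of reproducing it, assuming scikit-learn's train_test_split; the paper does not name a splitting library or a random seed, so both are assumptions here:

    from sklearn.model_selection import train_test_split

    # Hold out 20% of the training data as the validation set, as quoted
    # in the table. The random seed is an assumption; none is reported.
    X_train, X_val, y_train, y_val = train_test_split(
        X_train_full, y_train_full, test_size=0.2, random_state=0
    )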
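The software-dependencies row points at the heart of the method: monotonicity is certified by solving MILPs with Gurobi. The sketch below illustrates the general idea for a one-hidden-layer ReLU network f(x) = w2ᵀ ReLU(W1 x + b1) + b2 on [0,1]^d: encode the ReLU activation pattern with binary variables via a standard big-M construction and minimize the partial derivative with respect to a monotone feature; a nonnegative minimum certifies monotonicity. This is a simplified illustration under those assumptions, not the paper's exact multi-layer formulation:

    import gurobipy as gp
    from gurobipy import GRB

    def certify_feature(W1, b1, w2, i, M=100.0):
        """Lower-bound df/dx_i of a one-hidden-layer ReLU net over [0,1]^d.

        Returns True if the minimum of the partial derivative is >= 0,
        i.e., the network is certifiably monotone in feature i.
        """
        h, d = W1.shape
        m = gp.Model("monotonicity")
        m.Params.OutputFlag = 0
        x = m.addVars(d, lb=0.0, ub=1.0, name="x")
        z = m.addVars(h, vtype=GRB.BINARY, name="z")  # ReLU on/off pattern
        for j in range(h):
            pre = gp.quicksum(W1[j, k] * x[k] for k in range(d)) + b1[j]
            # Big-M link: z[j] = 1 whenever the pre-activation is positive
            # (ties at exactly 0 are left unconstrained in this sketch).
            m.addConstr(pre <= M * z[j])
            m.addConstr(pre >= -M * (1 - z[j]))
        # df/dx_i = sum_j w2[j] * W1[j, i] * z[j] under activation pattern z.
        m.setObjective(
            gp.quicksum(w2[j] * W1[j, i] * z[j] for j in range(h)), GRB.MINIMIZE
        )
        m.optimize()
        return m.objVal >= 0.0

The big-M constant must dominate every pre-activation attainable on [0,1]^d; deriving it from interval arithmetic on W1 and b1 keeps the encoding valid and tight.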
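Finally, the experiment-setup row describes the λ schedule that drives the train-then-certify procedure referenced in the pseudocode row: train with a monotonicity regularizer, attempt MILP certification, and multiply λ by 10 whenever the certificate fails. A minimal PyTorch sketch of that loop; the penalty term (squared negative partial derivatives at uniformly sampled points) and the helpers train_one_run and certified are hypothetical stand-ins for the paper's actual code:

    import torch

    def monotonicity_penalty(model, mono_idx, d, n_samples=1024):
        # Penalize negative partial derivatives w.r.t. the monotone features
        # at uniformly sampled inputs; the sampling scheme is an assumption.
        x = torch.rand(n_samples, d, requires_grad=True)
        grad = torch.autograd.grad(model(x).sum(), x, create_graph=True)[0]
        return (torch.relu(-grad[:, mono_idx]) ** 2).sum(dim=1).mean()

    lam = 1.0  # lambda initialized to 1, as quoted above
    while True:
        # train_one_run is hypothetical: Adam at the default lr 5e-3,
        # minimizing task_loss + lam * monotonicity_penalty(...).
        model = train_one_run(lam)
        # certified is a hypothetical wrapper around the Gurobi check above.
        if certified(model):
            break
        lam *= 10.0  # amplify lambda by 10 each time certification fails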