Counterexample-Guided Learning of Monotonic Neural Networks

Authors: Aishwarya Sivaraman, Golnoosh Farnadi, Todd Millstein, Guy Van den Broeck

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on real-world datasets demonstrate that our approach achieves state-of-the-art results compared to existing monotonic learners, and can improve the model quality compared to those that were trained without taking monotonicity constraints into account."
Researcher Affiliation | Academia | Aishwarya Sivaraman (University of California, Los Angeles, dcssiva@cs.ucla.edu); Golnoosh Farnadi (Mila/Université de Montréal, farnadig@mila.quebec); Todd Millstein (University of California, Los Angeles, todd@cs.ucla.edu); Guy Van den Broeck (University of California, Los Angeles, guyvdb@cs.ucla.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "We have implemented these techniques in a tool called COMET." The code is available at https://github.com/AishwaryaSivaraman/COMET
Open Datasets | Yes | "We use four datasets: Auto MPG and Boston Housing are regression datasets ... and are obtained from the UCI machine learning repository [6]; Heart Disease [19] and Adult [6] are classification datasets ..." [6] Catherine L. Blake and Christopher J. Merz. UCI Repository of Machine Learning Databases, 1998. [19] John H. Gennari, Pat Langley, and Douglas H. Fisher. Models of incremental concept formation. Artificial Intelligence, 40(1-3):11-61, 1989.
Dataset Splits | No | "We carry out our experiments on three random 80/20 splits and report average test results, except for the Adult dataset, for which we report on one random split." The paper specifies train/test splits but does not explicitly mention a validation split. (An illustrative split-and-evaluate sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, memory) used for running the experiments.
Software Dependencies | No | Experiments were implemented in Python using the Keras deep learning library [9]; the ADAM optimizer [29] is used for stochastic optimization of the neural network models, and the OptiMathSAT solver [39] is used for counterexample generation. No version numbers are provided for Keras or OptiMathSAT.
Experiment Setup | Yes | "For each dataset, we identify the best baseline architecture and parameters by conducting grid search and learn the best ReLU neural network (NNb). ... In this experiment we re-train NNb with counterexamples for 40 epochs, model selection is based on train quality ... We tune Adam stepsize, learning rate, number of epochs, and batch size on all methods." (A simplified retraining sketch follows the table.)
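
The following is a minimal sketch of the evaluation protocol quoted in the Dataset Splits and Software Dependencies rows: three random 80/20 train/test splits, a small ReLU network trained with Adam in Keras, and the average test error reported. The synthetic data, architecture sizes, learning rate, and epoch count are illustrative assumptions, not the paper's per-dataset configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

# Placeholder data standing in for a UCI regression dataset (e.g. Auto MPG).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 7)).astype("float32")
y = (X @ rng.normal(size=7) + rng.normal(scale=0.1, size=400)).astype("float32")

def build_model(n_features):
    # Small fully connected ReLU regressor; the paper selects the
    # architecture per dataset via grid search.
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
    return model

test_errors = []
for seed in range(3):  # three random 80/20 splits, averaged
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    model = build_model(X.shape[1])
    model.fit(X_tr, y_tr, epochs=50, batch_size=32, verbose=0)
    test_errors.append(model.evaluate(X_te, y_te, verbose=0))

print("average test MSE over 3 splits:", float(np.mean(test_errors)))
```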
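
The next sketch illustrates the counterexample-guided retraining step summarized in the Experiment Setup row. The paper generates monotonicity counterexamples exactly with the OptiMathSAT solver; here a cheap random-perturbation check stands in for the solver, so this only conveys the shape of the retraining loop, not COMET's actual algorithm. The helper names, the perturbation scheme, and the non-decreasing monotonicity assumption are all illustrative.

```python
import numpy as np

def find_violations(model, X, mono_feature, delta=0.5, n_probes=4):
    # For each training point, probe increases of the monotone feature;
    # if the prediction drops, keep the perturbed point as a counterexample
    # labeled with the original (higher) prediction.
    base = model.predict(X, verbose=0).ravel()
    cex_X, cex_y = [], []
    for step in range(1, n_probes + 1):
        X_pert = X.copy()
        X_pert[:, mono_feature] += delta * step
        pred = model.predict(X_pert, verbose=0).ravel()
        bad = pred < base  # non-decreasing monotonicity violated
        cex_X.append(X_pert[bad])
        cex_y.append(base[bad])
    return np.concatenate(cex_X), np.concatenate(cex_y)

def retrain_with_counterexamples(model, X, y, mono_feature, epochs=40, batch_size=32):
    # Re-train on the original data augmented with generated counterexamples,
    # mirroring the "re-train NNb with counterexamples for 40 epochs" setup.
    cx, cy = find_violations(model, X, mono_feature)
    X_aug = np.concatenate([X, cx]) if len(cx) else X
    y_aug = np.concatenate([y, cy]) if len(cy) else y
    model.fit(X_aug, y_aug, epochs=epochs, batch_size=batch_size, verbose=0)
    return model
```

Given a compiled Keras model such as the one built in the previous sketch, `retrain_with_counterexamples(model, X, y, mono_feature=0)` would run one augmentation-and-retrain pass; the paper's tool iterates this with solver-generated counterexamples.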