Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Advancing Constrained Monotonic Neural Networks: Achieving Universal Approximation Beyond Bounded Activations
Authors: Davide Sartor, Alberto Sinigaglia, Gian Antonio Susto
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental evaluation reinforces the validity of the theoretical results, showing that our novel approach compares favourably to traditional monotonic architectures. In this section, we aim to analyze the method's performance compared to other alternatives that provide monotonicity guarantees. |
| Researcher Affiliation | Academia | 1Department of Information Engineering, University of Padova, Padova (PD), Italy 2Human Inspired Technology Research Centre, University of Padova, Padova (PD), Italy. Correspondence to: Davide Sartor <EMAIL>, Alberto Sinigaglia <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Forward pass of a Monotonic MLP with post-activation switch |
| Open Source Code | Yes | Code available at github.com/AMCO-UniPD/monotonic. |
| Open Datasets | Yes | The first dataset used is COMPAS (Fabris et al., 2022). COMPAS is a dataset comprised of 13 features... A second classification dataset considered is the Heart Disease dataset... Lastly, we also test our method on the Loan Defaulter dataset... To test on a regression task, we use the Auto MPG dataset... A second dataset for regression is the Blog Feedback dataset (Buza, 2013). |
| Dataset Splits | No | The paper does not explicitly provide information on the training/test/validation dataset splits, only mentioning 'Test metrics' and a 'Batch-size' in Table 2 without further details on data partitioning percentages or methods. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types) used for running its experiments. |
| Software Dependencies | No | The experiments were developed in PyTorch. The training was performed using the Adam optimizer implementation from the PyTorch library. While PyTorch is mentioned, specific version numbers for PyTorch, Python, or other libraries are not provided. |
| Experiment Setup | Yes | Table 2. Hyper-parameters used for results reported in Table 1 (COMPAS / Blog Feedback / Loan Defaulter / Auto MPG / Heart Disease): Learning-rate 10⁻³ / 10⁻² / 10⁻³ / 10⁻³ / 10⁻³; Epochs 100 / 1000 / 50 / 300 / 300; Batch-size 8 / 256 / 256 / 8 / 8; Free layers size 16 / 2 / 16 / 8 / 16; Number of free layers 3 / 2 / 3 / 3 / 3; Monotonic layers size 16 / 3 / 16 / 8 / 16; Number of monotonic layers 3 / 2 / 3 / 3 / 3; Activation ReLU / CELU / ReLU / CELU / ReLU |
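The pseudocode entry above names "Algorithm 1: Forward pass of a Monotonic MLP with post-activation switch". To make the general idea concrete, here is a minimal NumPy sketch of a monotone-by-construction MLP: non-negative weights are obtained via `|W|`, and a per-unit boolean "switch" chooses between the convex activation ρ(z) and its concave reflection −ρ(−z). All names and the exact switching rule are assumptions for illustration, not the authors' Algorithm 1 (see the linked repository for the real implementation).

```python
import numpy as np


def relu(z):
    return np.maximum(z, 0.0)


def monotonic_layer(x, W, b, switch):
    """One monotone layer (illustrative sketch, not the paper's code).

    |W| keeps every partial derivative of the pre-activation non-negative;
    both relu(z) and -relu(-z) = min(z, 0) are non-decreasing, so the layer
    output is non-decreasing in every input coordinate.
    """
    z = x @ np.abs(W) + b
    return np.where(switch, relu(z), -relu(-z))


def monotonic_mlp(x, params):
    """Compose monotone layers; composition preserves monotonicity."""
    h = x
    for W, b, switch in params:
        h = monotonic_layer(h, W, b, switch)
    return h


# Hypothetical 2-layer network with unconstrained (signed) raw weights.
rng = np.random.default_rng(0)
params = [
    (rng.normal(size=(2, 4)), np.zeros(4), np.array([True, False, True, False])),
    (rng.normal(size=(4, 1)), np.zeros(1), np.array([True])),
]
x = np.array([[0.1, -0.2]])
y_lo = monotonic_mlp(x, params)
y_hi = monotonic_mlp(x + np.array([[0.5, 0.0]]), params)
assert y_hi[0, 0] >= y_lo[0, 0]  # increasing an input never decreases the output
```

Mixing ρ(z) with −ρ(−z) is what lets such networks escape the saturation of a single bounded or one-sided activation while keeping the monotonicity guarantee; the paper's contribution is proving universal approximation under this kind of construction.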