Monotone-Value Neural Networks: Exploiting Preference Monotonicity in Combinatorial Assignment
Authors: Jakob Weissteiner, Jakob Heiss, Julien Siems, Sven Seuken
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our MVNNs experimentally in spectrum auction domains. Our results show that MVNNs improve the prediction performance, yield state-of-the-art allocative efficiency in the auction, and also reduce the run-time of the WDPs. In this section, we show that in all considered CA domains, MVNNs are significantly better at capturing bidders' complex value functions than plain (ReLU) NNs, which allows them to extrapolate much better in the bundle space. (See the MVNN sketch below.) |
| Researcher Affiliation | Academia | ¹University of Zurich, ²ETH Zurich, ³ETH AI Center; weissteiner@ifi.uzh.ch, jakob.heiss@math.ethz.ch, juliensiems@gmail.com, seuken@ifi.uzh.ch |
| Pseudocode | No | The paper describes its algorithms through mathematical formulations, in particular a MILP for the winner determination problem (WDP), but does not contain any pseudocode or explicitly labeled algorithm blocks (see the MILP sketch below). |
| Open Source Code | Yes | Our code is available on GitHub: https://github.com/marketdesignresearch/MVNN. |
| Open Datasets | Yes | In our experiments we use simulated data from the Spectrum Auction Test Suite (SATS) version 0.7.0 [Weiss et al., 2017]. (See the SATS sketch below.) |
| Dataset Splits | Yes | Details on how we collect the data and the train/val/test split can be found in Appendix D.1. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. It only mentions 'MILP runtimes' without specifying the computational environment. |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., Python, PyTorch, or specific solver versions). |
| Experiment Setup | Yes | HPO: To efficiently optimize the hyperparameters and fairly compare MVNNs and plain NNs for best generalization across different instances of each SATS domain, we frame the hyperparameter optimization (HPO) problem as an algorithm configuration problem and use the well-established sequential model-based algorithm configuration (SMAC) [Hutter et al., 2011]. ... Further details on the setting, including hyperparameter ranges, can be found in Appendix D.2. (See the SMAC sketch below.) |
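For context on the architecture referenced in the Research Type row: the paper builds networks that are monotonically non-decreasing in the bundle by restricting each affine layer to non-negative weights and non-positive biases, with bounded-ReLU activations. The sketch below is a minimal PyTorch illustration of that construction, not the authors' implementation (which lives in the linked repository); the class names, layer sizes, and the `abs()`-based constraint enforcement are our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BoundedReLU(nn.Module):
    """Bounded ReLU phi_{0,t}(z) = min(t, max(0, z)): clips activations to [0, t]."""
    def __init__(self, t: float = 1.0):
        super().__init__()
        self.t = t

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.clamp(z, min=0.0, max=self.t)


class MonotoneLinear(nn.Linear):
    """Affine layer whose effective weights are >= 0 and biases <= 0,
    making it monotonically non-decreasing in every input coordinate."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bias = -self.bias.abs() if self.bias is not None else None
        return F.linear(x, self.weight.abs(), bias)


class MVNNSketch(nn.Module):
    """Hypothetical MVNN-style value network: monotone layers + bounded ReLUs."""
    def __init__(self, num_items: int, hidden=(16, 16), t: float = 1.0):
        super().__init__()
        layers, d = [], num_items
        for h in hidden:
            layers += [MonotoneLinear(d, h), BoundedReLU(t)]
            d = h
        layers.append(MonotoneLinear(d, 1, bias=False))  # non-negative read-out
        self.net = nn.Sequential(*layers)

    def forward(self, bundle: torch.Tensor) -> torch.Tensor:
        # bundle: a {0,1}^num_items indicator vector (batched)
        return self.net(bundle)
```

Because every layer and every activation is non-decreasing, the composition is monotone: adding items to a bundle can never decrease the predicted value, which is exactly the preference-monotonicity prior the paper exploits.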
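On the Pseudocode row: the paper's WDP is a MILP into which the trained network is encoded. Its actual encoding handles the bounded ReLU; the sketch below instead shows the standard big-M encoding of a single plain-ReLU unit y = max(0, z), using PuLP (a library choice of ours) and made-up weights, as a minimal illustration of the idea under the assumption of known pre-activation bounds [L, U].

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, value

L, U = -1.0, 4.0  # valid bounds on the pre-activation z = 2*x1 + 3*x2 - 1

prob = LpProblem("toy_wdp", LpMaximize)
x1 = LpVariable("x1", cat=LpBinary)   # item 1 in the bundle?
x2 = LpVariable("x2", cat=LpBinary)   # item 2 in the bundle?
z = 2 * x1 + 3 * x2 - 1               # affine pre-activation (made-up weights/bias)
y = LpVariable("y", lowBound=0)       # will equal max(0, z)
a = LpVariable("a", cat=LpBinary)     # a = 1 iff the unit is active (z >= 0)

prob += y                             # objective: maximize the network output
prob += y >= z                        # y dominates z ...
prob += y <= z - L * (1 - a)          # ... and equals z when a = 1
prob += y <= U * a                    # y = 0 when a = 0

prob.solve()
print(value(x1), value(x2), value(y))  # expect 1 1 4.0
```

With a = 1 the constraints force y = z; with a = 0 they force y = 0 and z <= 0, so the binary variable exactly linearizes the ReLU. The paper applies this kind of construction per hidden unit to make the WDP over MVNN outputs solvable by an off-the-shelf MILP solver.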
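On the Open Datasets row: SATS itself is a Java tool, accessed from Python through the PySats bridge in the authors' ecosystem. The snippet below is a sketch based on the PySats project README; the method names (`create_gsvm`, `get_bidder_ids`, `get_good_ids`, `calculate_value`) should be verified against the bridge version pinned for SATS 0.7.0, and a JVM must be available.

```python
from pysats import PySats  # Python bridge to the Java-based SATS (requires a JVM)

# Create one instance of the GSVM spectrum-auction domain; the seed
# controls which synthetic auction instance is generated.
gsvm = PySats.getInstance().create_gsvm(seed=1)

for bidder_id in gsvm.get_bidder_ids():
    # A bundle is a 0/1 indicator vector over the domain's items;
    # here we query each bidder's value for the full bundle.
    bundle = [1] * len(gsvm.get_good_ids())
    print(bidder_id, gsvm.calculate_value(bidder_id, bundle))
```

Sampled (bundle, value) pairs like these form the training data whose train/val/test split is described in the paper's Appendix D.1.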
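On the Experiment Setup row: the paper runs its HPO with SMAC. The sketch below targets the current SMAC3 Python API (which may differ from the SMAC version the authors used); the hyperparameter names, ranges, and the toy objective are illustrative placeholders, not the paper's actual search space from Appendix D.2.

```python
from ConfigSpace import ConfigurationSpace, Float, Integer
from smac import HyperparameterOptimizationFacade, Scenario

# Illustrative search space; the paper's real ranges are in its Appendix D.2.
cs = ConfigurationSpace()
cs.add_hyperparameters([
    Float("lr", (1e-4, 1e-1), log=True),
    Integer("hidden_units", (8, 64)),
])

def train_and_eval(config, seed: int = 0) -> float:
    # Placeholder objective: in the paper's setting this would train an
    # MVNN (or plain NN) with these hyperparameters and return the
    # validation loss; SMAC minimizes the returned value.
    return (config["lr"] - 1e-2) ** 2 + abs(config["hidden_units"] - 32) / 100

scenario = Scenario(cs, n_trials=50)
smac = HyperparameterOptimizationFacade(scenario, train_and_eval)
incumbent = smac.optimize()
print(incumbent)  # best configuration found
```

Framing HPO as algorithm configuration, as the paper does, lets one incumbent configuration be tuned for good generalization across many SATS instances rather than overfitting a single instance.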