Machine Learning-Powered Combinatorial Clock Auction

Authors: Ermis Nikiforos Soumalias, Jakob Weissteiner, Jakob Heiss, Sven Seuken

AAAI 2024

Reproducibility Variable Result LLM Response
Research Type Experimental We experimentally evaluate our ML-based demand query mechanism in several spectrum auction domains and compare it against the most established real-world ICA: the combinatorial clock auction (CCA). Our mechanism significantly outperforms the CCA in terms of efficiency in all domains, it achieves higher efficiency in a significantly reduced number of rounds, and, using linear prices, it exhibits vastly higher clearing potential.
Researcher Affiliation Academia University of Zurich, ETH Zurich, ETH AI Center
Pseudocode Yes Algorithm 1: TRAINONDQS; Algorithm 2: ML-CCA
Open Source Code Yes Our source code is publicly available on GitHub at https://github.com/marketdesignresearch/ML-CCA.
Open Datasets Yes To generate synthetic CA instances, we use the GSVM, LSVM, SRVM, and MRVM domains from the spectrum auction test suite (SATS) (Weiss, Lubin, and Seuken 2017) (see Appendix D.1 for details).
Dataset Splits Yes Additionally, we mark the bundle x^CCA ∈ X from this last CCA iteration (i.e., the one resulting from p_50) with a black star. Moreover, we present two different validation sets on which we evaluate m MVNN configurations in our hyperparameter optimization (HPO): validation set 1 (red circles), consisting of 50,000 bundles x ∈ X sampled uniformly at random, and validation set 2 (green circles), where we first sample 500 price vectors {p_r}_{r=1}^{500}, drawing the price of each item uniformly at random from the range of 0 to 3 times the average maximum value of an agent of that type for a single item, and then determine utility-maximizing bundles x*_i(p_r) (w.r.t. v_i) at those prices (cp. Equation (1)).
Hardware Specification Yes All experiments were performed on a cluster equipped with AMD EPYC 7742 (2.25 GHz) CPUs (8 cores, 16 threads, 64 MB L3 cache), NVIDIA A100 GPUs (40 GB RAM) with CUDA 11.2, and 256 GB RAM.
Software Dependencies Yes Our implementation does not use GPUs for training or inference; however, Algorithm 1 requires solving a mixed-integer program (MIP) in each iteration, which we do via Gurobi 9.5.2.
Experiment Setup Yes For both mechanisms, we allow a maximum of 100 clock rounds per instance, i.e., we set Qmax = 100. For CCA, we set the price increment to 5%... In GSVM, LSVM and SRVM we set Qinit = 20 for ML-CCA, while in MRVM we set Qinit = 50.
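The validation-set-2 construction quoted above (sample price vectors item-wise from [0, 3 × average maximum single-item value], then compute utility-maximizing bundles at those prices) can be sketched as follows. This is an illustrative sketch, not the paper's code: the item count, the per-item average values, and the additive toy valuation `v_i` are all placeholder assumptions (for additive valuations the demand query reduces to an item-wise comparison; the paper's MVNN-based demand queries require a MIP instead).

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dimensions for illustration only (not taken from the paper).
n_items = 18
n_price_vectors = 500  # as described for validation set 2
avg_max_value = np.full(n_items, 10.0)  # hypothetical per-item average max values

# Draw each item's price uniformly from [0, 3 * average max value of that item type].
price_vectors = rng.uniform(
    low=0.0, high=3.0 * avg_max_value, size=(n_price_vectors, n_items)
)

# Toy additive valuation: the utility-maximizing bundle at prices p simply
# includes every item whose value exceeds its price.
v_i = rng.uniform(0.0, 20.0, size=n_items)
demanded_bundles = (v_i > price_vectors).astype(int)
```

Each row of `demanded_bundles` is then one validation point x*_i(p_r) for the corresponding sampled price vector p_r.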
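The experiment-setup row above fixes two clock-phase parameters: a 5% price increment for CCA and a cap of Qmax = 100 clock rounds. A minimal sketch of how such a clock phase iterates, assuming a hypothetical `demand_fn(prices)` interface returning per-item aggregate demand (this is a generic illustration of a clock auction, not the paper's implementation):

```python
def run_clock_phase(start_prices, demand_fn, supply, increment=0.05, max_rounds=100):
    """Generic clock-phase sketch: each round, raise the price of every
    over-demanded item by `increment` (5%, as in the CCA setup), stopping
    once no item is over-demanded or after `max_rounds` (Qmax) rounds."""
    prices = list(start_prices)
    for round_no in range(1, max_rounds + 1):
        demand = demand_fn(prices)
        over_demanded = [j for j in range(len(prices)) if demand[j] > supply[j]]
        if not over_demanded:
            return prices, round_no  # no item over-demanded: clock phase ends
        for j in over_demanded:
            prices[j] *= 1.0 + increment
    return prices, max_rounds

# Toy single-item market: two units are demanded until the price reaches 10.
final_prices, rounds_used = run_clock_phase(
    start_prices=[1.0],
    demand_fn=lambda p: [2 if p[0] < 10 else 1],
    supply=[1],
)
```

With multiplicative 5% increments, the price after k raises is p_0 · 1.05^k, so the toy market above resolves well within the 100-round cap.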