Deep Learning-Powered Iterative Combinatorial Auctions

Authors: Jakob Weissteiner, Sven Seuken

AAAI 2020, pp. 2284-2293

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally compare the prediction performance of DNNs against SVRs. Third, we present experimental evaluations in two medium-sized domains which show that even ICAs based on relatively small-sized DNNs lead to higher economic efficiency than ICAs based on kernelized SVRs. Finally, we show that our DNN-powered ICA also scales well to very large CA domains. (Section 5: Experimental Evaluation)
Researcher Affiliation | Academia | Jakob Weissteiner, University of Zurich, weissteiner@ifi.uzh.ch; Sven Seuken, University of Zurich, seuken@ifi.uzh.ch
Pseudocode | Yes | Algorithm 1: ML-BASED ELICITATION (Brero et al. 2018), Parameter: Machine learning algorithm A ... Algorithm 2: PVM (Brero et al. 2018) [a simplified sketch of this elicitation loop is given below the table]
Open Source Code | Yes | We release our code under an open-source license at: https://github.com/marketdesignresearch/DL-ICA.
Open Datasets | Yes | Specifically, we use the spectrum auction test suite (SATS) version 0.6.4 (Weiss, Lubin, and Seuken 2017).
Dataset Splits | No | For each such instance, we sample, for each bidder type, a training set T of equal size and a disjoint test set V consisting of all remaining bundles, i.e., |V| := 2^|M| - |T|. For each bidder type, we train the ML algorithm on T and test it on V. The paper mentions training and test sets but does not explicitly state the use of a validation set or specific train/validation/test splits. [a sketch of this bundle split is given below the table]
Hardware Specification | Yes | Experiments were conducted on machines with Intel Xeon E5-2650 v4 2.20GHz processors with 40 logical cores.
Software Dependencies | Yes | For fitting the DNNs in all of our experiments we use PYTHON 3.5.3, KERAS 2.2.4 and TENSORFLOW 1.13.1. For solving MIPs of the form (OP2) in our experiments we use CPLEX 12.8.0.0 with the python library DOCPLEX 2.4.61. [a DOCPLEX sketch of a ReLU-network MIP is given below the table]
Experiment Setup | Yes | we follow Brero, Lubin, and Seuken (2018) and assume that bidders answer all value queries truthfully. Furthermore, we also use their experiment setup and define a cap c_e on the total number of queries in Algorithm 1 and set c_e := 50. The initial set of reported bundle-value pairs B^0_i per bidder i is drawn uniformly at random and set to be equal across bidders. We denote the number of initial reports by c_0 := |B^0_i|, i ∈ N, resulting in a maximum of c_0 + n(c_e - c_0) queries per bidder. For the DNNs, we optimized the architecture, the L2-regularization parameter for the affine mappings, the dropout rate per layer, and the learning rate of ADAM. For SVRs with a quadratic kernel k(x, y) := x^T y + γ(x^T y)^2, we optimized γ (i.e., the influence of the quadratic term), the regularization parameter C, and the loss function parameter ϵ. [a sketch of both model families is given below the table]
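
The Pseudocode row quotes Algorithm 1 (ML-BASED ELICITATION) and Algorithm 2 (PVM) from Brero et al. (2018). Below is a minimal sketch of the elicitation loop only, under stated assumptions: simulated truthful bidders are given as callables `true_value[i]`, `fit(reports)` returns a callable predictor, and the ML-based winner determination step is done by brute-force enumeration rather than the paper's MIP (OP2). It illustrates the control flow, not the authors' implementation.

```python
# Minimal sketch of the ML-based elicitation loop (Algorithm 1, Brero et al. 2018).
# Assumptions: `true_value[i]` simulates bidder i's truthful value reports and
# `fit(reports)` returns a callable predictor; winner determination is brute force.
import itertools
import numpy as np

def elicit(true_value, n_items, fit, c0=5, ce=10, seed=0):
    n = len(true_value)
    rng = np.random.default_rng(seed)
    bundles = [np.array(b) for b in itertools.product([0, 1], repeat=n_items)]
    # initial reports: c0 bundle-value pairs per bidder, drawn uniformly at random
    # and set to be equal across bidders
    init = rng.choice(len(bundles), size=c0, replace=False)
    reports = [{tuple(bundles[k]): true_value[i](bundles[k]) for k in init}
               for i in range(n)]
    while all(len(r) < ce for r in reports):
        models = [fit(r) for r in reports]           # 1. fit one model per bidder
        best, best_val = None, -np.inf               # 2. ML-based winner determination
        for alloc in itertools.product(range(n + 1), repeat=n_items):
            assigned = [np.array([1 if a == i else 0 for a in alloc]) for i in range(n)]
            val = sum(models[i](assigned[i]) for i in range(n))
            if val > best_val:
                best, best_val = assigned, val
        new_query = False                            # 3. query each bidder's allocated bundle
        for i in range(n):
            key = tuple(best[i])
            if key not in reports[i]:
                reports[i][key] = true_value[i](best[i])
                new_query = True
        if not new_query:                            # stop when no new information is gained
            break
    return reports
```

The `fit` callable could wrap either of the two model classes sketched further below.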
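The Dataset Splits row describes sampling, per bidder type, a training set T of bundles and using all remaining bundles as the disjoint test set V. A minimal sketch of that split, with illustrative names and sizes:

```python
# Hedged sketch of the bundle train/test split: sample |T| bundles for training
# and use all remaining 2^|M| - |T| bundles as the test set V.
# `num_items` and `train_size` are illustrative placeholders.
import itertools
import numpy as np

def split_bundles(num_items, train_size, seed=0):
    all_bundles = np.array(list(itertools.product([0, 1], repeat=num_items)))
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(all_bundles))
    T = all_bundles[idx[:train_size]]          # training set T
    V = all_bundles[idx[train_size:]]          # test set V, |V| = 2^|M| - |T|
    return T, V

T, V = split_bundles(num_items=10, train_size=100)
assert len(V) == 2**10 - 100
```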
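The Software Dependencies row states that MIPs of the form (OP2) are solved with CPLEX via DOCPLEX. As a rough, hedged illustration of how trained ReLU networks can be folded into such a MIP, the sketch below uses the standard big-M encoding of ReLU activations together with an each-item-at-most-once allocation constraint; the toy weights, layer sizes, and the big-M constant are assumptions, and the code is not the authors' (OP2) formulation or their released implementation.

```python
# Hedged sketch: big-M MIP encoding of trained ReLU networks with DOCPLEX,
# in the spirit of a DNN-based winner determination problem. Toy weights,
# layer sizes and the big-M constant are illustrative assumptions.
import numpy as np
from docplex.mp.model import Model

def add_relu_network(mip, x_vars, weights, biases, big_m=1e3, tag=""):
    """Encode a feed-forward ReLU network with input x_vars; return its scalar output."""
    layer_in = x_vars
    for l, (W, b) in enumerate(zip(weights[:-1], biases[:-1])):
        z = mip.continuous_var_list(W.shape[0], lb=0, name=f"z{tag}_{l}")
        y = mip.binary_var_list(W.shape[0], name=f"y{tag}_{l}")
        for k in range(W.shape[0]):
            pre = mip.sum(float(W[k, j]) * layer_in[j] for j in range(W.shape[1])) + float(b[k])
            mip.add_constraint(z[k] >= pre)                        # z = max(0, pre) via big-M
            mip.add_constraint(z[k] <= pre + big_m * (1 - y[k]))
            mip.add_constraint(z[k] <= big_m * y[k])
        layer_in = z
    W, b = weights[-1], biases[-1]                                 # linear output layer
    return mip.sum(float(W[0, j]) * layer_in[j] for j in range(W.shape[1])) + float(b[0])

# Toy instance: 3 bidders, 5 items, one hidden layer of 4 units per bidder network.
n_bidders, n_items = 3, 5
rng = np.random.default_rng(0)
nets = [([rng.normal(size=(4, n_items)), rng.normal(size=(1, 4))],
         [rng.normal(size=4), rng.normal(size=1)]) for _ in range(n_bidders)]

mip = Model(name="dnn_based_wdp")
x = [[mip.binary_var(name=f"x_{i}_{j}") for j in range(n_items)] for i in range(n_bidders)]
for j in range(n_items):                                           # each item allocated at most once
    mip.add_constraint(mip.sum(x[i][j] for i in range(n_bidders)) <= 1)
mip.maximize(mip.sum(add_relu_network(mip, x[i], w, b, tag=f"_{i}")
                     for i, (w, b) in enumerate(nets)))
sol = mip.solve()
if sol is not None:
    print("predicted social welfare:", sol.objective_value)
```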
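The Experiment Setup row lists the tuned hyperparameters for the DNNs (architecture, L2 regularization of the affine mappings, per-layer dropout, ADAM learning rate) and for the SVRs (γ, C, ϵ of the quadratic kernel k(x, y) = x^T y + γ(x^T y)^2). The sketch below shows minimal versions of both model families with placeholder hyperparameter values; it uses tf.keras and scikit-learn rather than the exact KERAS 2.2.4 / TENSORFLOW 1.13.1 stack listed above.

```python
# Hedged sketch of the two model families from the Experiment Setup row.
# Hyperparameter values and layer sizes are illustrative placeholders.
import numpy as np
from sklearn.svm import SVR
from tensorflow import keras

def build_dnn(num_items, hidden=(32, 32), l2=1e-4, dropout=0.05, lr=1e-3):
    model = keras.Sequential()
    for i, units in enumerate(hidden):
        kwargs = {"input_shape": (num_items,)} if i == 0 else {}
        model.add(keras.layers.Dense(units, activation="relu",
                                     kernel_regularizer=keras.regularizers.l2(l2),
                                     **kwargs))
        model.add(keras.layers.Dropout(dropout))     # dropout rate per layer
    model.add(keras.layers.Dense(1))                 # linear output: predicted bundle value
    # note: the paper's KERAS 2.2.4 / TF 1.13 stack would use Adam(lr=...) instead
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr), loss="mse")
    return model

def quadratic_kernel(gamma):
    # k(x, y) = x^T y + gamma * (x^T y)^2, evaluated as a Gram matrix for scikit-learn
    def k(X, Y):
        lin = np.asarray(X) @ np.asarray(Y).T
        return lin + gamma * lin**2
    return k

svr = SVR(kernel=quadratic_kernel(gamma=0.1), C=10.0, epsilon=0.01)
```

Passing a callable as `kernel` makes scikit-learn evaluate the custom Gram matrix directly, which matches the quadratic kernel quoted in the table.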