Bayesian Optimization-Based Combinatorial Assignment
Authors: Jakob Weissteiner, Jakob Heiss, Julien Siems, Sven Seuken
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We run computational experiments in several spectrum auction domains to evaluate BOCA's performance. Our results show that BOCA achieves higher allocative efficiency than state-of-the-art approaches. |
| Researcher Affiliation | Academia | University of Zurich; ETH Zurich; ETH AI Center |
| Pseudocode | No | The information is insufficient. The paper describes various algorithms and methods but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks, nor does it present structured code-like steps for a procedure. |
| Open Source Code | Yes | Our source code is publicly available on GitHub via: https://github.com/marketdesignresearch/BOCA. |
| Open Datasets | Yes | To generate synthetic CA instances, we use the following three domains from the spectrum auction test suite (SATS) (Weiss, Lubin, and Seuken 2017): LSVM, SRVM, and MRVM (see Appendix G.1 for details). See the instance-generation sketch below the table. |
| Dataset Splits | No | The information is insufficient. The paper mentions training on 'Dtrain' and evaluating on a 'disjoint test set Dtest' but does not provide specific split percentages or counts for training, validation, and test sets to reproduce the data partitioning. |
| Hardware Specification | No | The information is insufficient. The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The information is insufficient. While the paper mentions using a MILP solver (e.g., implied for solving Equation 10 and Theorem 2), it does not specify any software dependencies with version numbers (e.g., 'CPLEX 12.4' or 'PyTorch 1.9'). |
| Experiment Setup | Yes | To enable a fair comparison against prior work, for each domain, we use Qinit = 40 initial random queries (including the full bundle for the calculation of M_i^{100%-uUB}) and set the query budget to Qmax = 100... We use random search (RS) (Bergstra and Bengio 2012) to optimize the hyperparameters of the mean MVNN M_i^{mean} and of our MVNN-based uUB M_i^{uUB}. The HPO includes the NN architecture parameters, training parameters, NOMU parameters, and initialization parameters (see Section 3.2). See the random-search sketch below the table. |
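For context on the Open Datasets row, the following is a minimal sketch of how SATS auction instances can be generated programmatically. It assumes the PySATS Python wrapper maintained by the same research group (https://github.com/marketdesignresearch/PySATS); the method names used here (`PySats.getInstance`, `create_lsvm`, `get_bidder_ids`, `get_good_ids`, `calculate_value`) and the seeding are assumptions about that wrapper, not code taken from the BOCA repository.

```python
# Sketch: generating one synthetic LSVM instance via the (assumed) PySATS wrapper.
from pysats import PySats

sats = PySats.getInstance()
lsvm = sats.create_lsvm(seed=1)       # one synthetic LSVM auction instance

bidder_ids = lsvm.get_bidder_ids()    # LSVM has 6 bidders
good_ids = lsvm.get_good_ids()        # LSVM has 18 goods (licences)

full_bundle = [1] * len(good_ids)     # indicator vector selecting every good
for bidder in bidder_ids:
    # Query the bidder's true value for the full bundle, which the paper
    # includes among the Qinit = 40 initial queries (see Experiment Setup row).
    print(bidder, lsvm.calculate_value(bidder, full_bundle))
```

Analogous `create_srvm` and `create_mrvm` calls would cover the other two domains mentioned in the paper, under the same caveat about the wrapper's interface.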
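The Experiment Setup row mentions random search (Bergstra and Bengio 2012) over MVNN hyperparameters. The sketch below shows the generic random-search loop that description implies; the search-space names and ranges (`num_hidden_layers`, `learning_rate`, etc.) and the `train_and_eval` callback are illustrative assumptions, not the authors' configuration.

```python
import math
import random

# Hypothetical search space covering the hyperparameter groups named in the paper
# (NN architecture, training, NOMU, initialization). Ranges are placeholders.
SEARCH_SPACE = {
    "num_hidden_layers": [1, 2, 3],
    "num_hidden_units": [16, 32, 64, 128],
    "learning_rate": (1e-4, 1e-2),   # continuous range, sampled log-uniformly
    "epochs": [200, 400, 800],
    "nomu_scale": (1e-3, 1e2),       # placeholder NOMU scaling parameter
}

def sample_config(space, rng):
    """Draw one random configuration (random search in the style of Bergstra & Bengio 2012)."""
    cfg = {}
    for name, domain in space.items():
        if isinstance(domain, tuple):  # continuous range: log-uniform sample
            lo, hi = domain
            cfg[name] = 10 ** rng.uniform(math.log10(lo), math.log10(hi))
        else:                          # discrete choices: uniform sample
            cfg[name] = rng.choice(domain)
    return cfg

def random_search(train_and_eval, n_trials=64, seed=0):
    """Return the sampled configuration with the lowest validation loss."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = sample_config(SEARCH_SPACE, rng)
        loss = train_and_eval(cfg)     # user-supplied: trains an MVNN, returns validation loss
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss
```

A caller would supply a `train_and_eval` function that fits the mean MVNN or the uUB network on the current bidder data and reports a validation metric; the number of trials and the seed above are arbitrary.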