Automated Design of Affine Maximizer Mechanisms in Dynamic Settings

Authors: Michael Curry, Vinzenz Thoma, Darshan Chakrabarti, Stephen McAleer, Christian Kroer, Tuomas Sandholm, Niao He, Sven Seuken

AAAI 2024

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | In experiments on several dynamic mechanism design settings, such as sequential auctions, task scheduling, and navigating a gridworld, our approaches result in truthful mechanisms that outperform the VCG baseline.
Researcher Affiliation | Collaboration | Michael Curry*1,2,4, Vinzenz Thoma*3,4, Darshan Chakrabarti5, Stephen McAleer6, Christian Kroer5, Tuomas Sandholm6,7, Niao He3, Sven Seuken2,4 — 1Harvard University, 2University of Zurich, 3ETH Zurich, 4ETH AI Center, 5Columbia University, 6Carnegie Mellon University, Computer Science Department, 7Optimized Markets, Strategy Robot, Strategic Machine
Pseudocode | Yes | Algorithm 1: Gradient-based Dynamic Mechanism Design
Open Source Code | No | The paper does not include any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper describes generating reward distributions (e.g., "uniformly from [0, 1]", "uniformly distributed on [0, i]") for its experimental settings, but does not refer to or provide access information for any publicly available or open datasets in the traditional sense (e.g., via a URL, DOI, or specific repository name).
Dataset Splits | No | The paper mentions sampling type profiles for optimization steps (e.g., "2000 randomly sampled reward profiles" and "20 sampled type profiles"), but it does not specify explicit training/validation/test dataset splits with percentages, absolute counts, or citations to predefined splits for reproducibility.
Hardware Specification | No | The paper does not mention any specific hardware components (e.g., CPU, GPU models, or cloud computing instance types) used for running the experiments.
Software Dependencies | Yes | "We solve the regularized program using MOSEK (ApS 2023) and use the DiffOpt package within JuMP (Lubin et al. 2023) to differentiate." References: ApS, M. 2023. The MOSEK optimization toolbox, version 10.0. Lubin, M.; Dowson, O.; Garcia, J. D.; Huchette, J.; Legat, B.; and Vielma, J. P. 2023. JuMP 1.0: Recent improvements to a modeling language for mathematical optimization. In press.
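The MOSEK/DiffOpt/JuMP pipeline above differentiates through a regularized allocation program. As a rough, library-free illustration of the idea (not the paper's implementation), entropy regularization relaxes the winner-determination argmax to a softmax over candidate allocations, which is differentiable in the affine-maximizer weights and boosts; all names, shapes, and the example values below are assumptions:

```python
import numpy as np

def smoothed_allocation(agent_values, weights, boosts, tau=1e-2):
    """Entropy-regularized stand-in for the allocation LP.

    agent_values: (n_allocations, n_agents) value of each allocation to each agent
    weights:      (n_agents,) affine-maximizer agent weights
    boosts:       (n_allocations,) affine-maximizer allocation boosts
    tau:          smoothing parameter; tau -> 0 recovers the exact argmax

    With entropy regularization the argmax relaxes to a softmax over
    allocations, so the output is differentiable in (weights, boosts).
    """
    scores = agent_values @ weights + boosts   # weighted welfare per allocation
    scores = (scores - scores.max()) / tau     # stabilize before exponentiating
    probs = np.exp(scores)
    return probs / probs.sum()

# Two candidate allocations, two agents; allocation 1 has higher weighted welfare.
values = np.array([[1.0, 0.2],
                   [0.4, 0.9]])
p = smoothed_allocation(values, weights=np.ones(2), boosts=np.zeros(2))
```

With the small smoothing parameter 10^-2 reported in the paper, the relaxed allocation concentrates almost all probability on the welfare-maximizing allocation while remaining smooth enough to differentiate.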
Experiment Setup | Yes | We estimate derivatives using 20 perturbations, sampled from a Gaussian distribution with standard deviation 0.05 per estimate on 20 sampled type profiles. We use a learning rate of 0.1. We compute derivatives with respect to social welfare using the regularized LP, with smoothing parameter 10^-2 except where mentioned. For each stochastic gradient step, we sample 20 type profiles and optimize with learning rate 10^-2. In all cases, when evaluating the objective, we sample 10000 type profiles and do not use regularization.
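The perturbation-based derivative estimates described above amount to zeroth-order (Gaussian-smoothing) gradient estimation. A minimal sketch on a toy objective, assuming a two-sided estimator and reusing the stated hyperparameters (20 perturbations, standard deviation 0.05, learning rate 0.1); the placeholder objective and function names are not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_gradient(objective, params, n_perturb=20, sigma=0.05):
    """Two-sided zeroth-order gradient estimate: average finite
    differences of the objective along random Gaussian directions."""
    grad = np.zeros_like(params)
    for _ in range(n_perturb):
        eps = rng.standard_normal(params.shape)
        diff = objective(params + sigma * eps) - objective(params - sigma * eps)
        grad += (diff / (2.0 * sigma)) * eps
    return grad / n_perturb

# Toy concave objective standing in for expected social welfare.
objective = lambda p: -np.sum(p ** 2)
theta = np.array([1.0, -2.0])
for _ in range(200):
    theta = theta + 0.1 * estimate_gradient(objective, theta)  # ascent, lr 0.1
```

On this toy objective the iterates contract toward the maximizer at the origin; in the paper the same style of estimate is averaged over sampled type profiles rather than evaluated on a closed-form function.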