Convergence Analysis of No-Regret Bidding Algorithms in Repeated Auctions

Authors: Zhe Feng, Guru Guruganesh, Christopher Liaw, Aranyak Mehta, Abhishek Sethi

AAAI 2021, pp. 5399-5406

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments corroborate our theoretical findings and also find a similar convergence when we use other strategies such as Deep Q-Learning. We complement these results by simulating the above model with experiments. In particular, we show that these algorithms converge and produce truthful (or the canonical) equilibria. Furthermore, we show that the algorithm converges much quicker than the theory would predict.
Researcher Affiliation | Collaboration | Zhe Feng (Harvard University), Guru Guruganesh (Google Research), Christopher Liaw (University of British Columbia), Aranyak Mehta (Google Research), Abhishek Sethi (Google Research)
Pseudocode | Yes | Algorithm 1: Mean-based (Contextual) Learning Algorithm of Bidder i (a generic illustrative sketch follows the table)
Open Source Code | No | No explicit statement about providing open-source code for the described methodology, or a link to a code repository, was found.
Open Datasets | No | The paper describes generating data by sampling from independent uniform distributions but does not refer to any publicly available dataset or provide access to its generated data.
Dataset Splits | No | The paper describes simulated environments but does not provide training/validation/test dataset split information.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments are mentioned in the provided text.
Software Dependencies | No | No ancillary software with specific version numbers (e.g., PyTorch 1.9 or CPLEX 12.4) is mentioned.
Experiment Setup | Yes | The details of the Deep Q-Learning model and the set of hyperparameters used to train the two Q models are outlined in Appendix C.
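For readers without access to the paper's Algorithm 1, the sketch below illustrates a generic mean-based bidding rule, here instantiated as exponential weights (Hedge) over a discretized bid grid in a repeated second-price auction. It is not the authors' implementation: the fixed private values, grid resolution, horizon, and learning rate are illustrative assumptions (the paper instead samples values from independent uniform distributions and also treats a contextual variant).

```python
# Minimal sketch of a mean-based bidder (Hedge / exponential weights) in a
# repeated two-bidder second-price auction. All constants are illustrative
# assumptions, not the settings used in the paper.
import numpy as np

rng = np.random.default_rng(0)

T = 20_000                            # auction rounds (assumption)
grid = np.linspace(0.0, 1.0, 21)      # discretized bid space {0.00, 0.05, ..., 1.00}
values = [0.83, 0.47]                 # fixed private values (assumption)
eta = np.sqrt(np.log(len(grid)) / T)  # standard Hedge learning rate

# Cumulative counterfactual utility of every grid bid, per bidder.
cum_utility = [np.zeros_like(grid) for _ in values]

def hedge_distribution(cum_u):
    """Exponential-weights distribution over the bid grid (a mean-based rule)."""
    w = np.exp(eta * (cum_u - cum_u.max()))
    return w / w.sum()

for t in range(T):
    bids = [rng.choice(grid, p=hedge_distribution(cu)) for cu in cum_utility]
    for i, v in enumerate(values):
        other_bid = max(b for j, b in enumerate(bids) if j != i)
        # Second-price utility of each grid bid against the realized competing
        # bid: win (ties broken in favor of bidder i) and pay the competing
        # bid, otherwise receive zero.
        wins = grid >= other_bid
        cum_utility[i] += np.where(wins, v - other_bid, 0.0)

for i, v in enumerate(values):
    p = hedge_distribution(cum_utility[i])
    print(f"bidder {i}: value={v:.2f}, modal bid={grid[p.argmax()]:.2f}")
```

Because truthful bidding is weakly dominant in a second-price auction, each bidder's weight in this sketch should concentrate on the grid point just below its value, which is consistent with the convergence to truthful (or canonical) equilibria that the paper reports for mean-based algorithms.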