The Power of Regularization in Solving Extensive-Form Games

Authors: Mingyang Liu, Asuman E. Ozdaglar, Tiancheng Yu, Kaiqing Zhang

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We also provide numerical results to corroborate the advantages of our algorithms."
Researcher Affiliation | Academia | Mingyang Liu (1), Asuman Ozdaglar (2), Tiancheng Yu (2), Kaiqing Zhang (3); (1) Institute for Interdisciplinary Information Sciences, Tsinghua University; (2) LIDS, EECS, Massachusetts Institute of Technology; (3) University of Maryland, College Park
Pseudocode | Yes | Algorithm 1 (Adaptive Weight-Shrinking)
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology.
Open Datasets | Yes | "Beyond sharp theoretical guarantees, regularized algorithms in EFG also have superior performance in practice, which we showcase in this section through numerical experiments in Kuhn Poker (Kuhn, 1950) and Leduc Poker (Southey et al., 2005)." (See the loading sketch after the table.)
Dataset Splits | No | The paper conducts experiments in game environments (Kuhn Poker, Leduc Poker), which are not typically partitioned into explicit train/validation/test dataset splits; therefore, no specific validation split information is provided.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | No | The paper mentions that a grid search was used to find parameters but does not specify the concrete hyperparameter values or other system-level training settings in the main text or appendices. (See the hypothetical grid-search sketch after the table.)
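
On the Open Datasets row: Kuhn Poker and Leduc Poker are standard public benchmark games rather than downloadable datasets. As a minimal sketch (not from the paper), assuming the open-source OpenSpiel library, both games can be instantiated directly:

    # Minimal sketch; assumption: OpenSpiel is installed (e.g. `pip install open_spiel`).
    # The paper does not state which implementation of the games it uses.
    import pyspiel

    for name in ["kuhn_poker", "leduc_poker"]:
        game = pyspiel.load_game(name)       # builds the extensive-form game description
        state = game.new_initial_state()     # root state (a chance node dealing cards)
        print(f"{name}: {game.num_players()} players, "
              f"max game length {game.max_game_length()}")

Because the games are generated programmatically, there is nothing to split into train/validation/test sets, which is consistent with the Dataset Splits row above.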
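
On the Experiment Setup row: the paper states that hyperparameters were found by grid search without listing the selected values. Purely as a hypothetical illustration (the parameter names eta and tau, the candidate ranges, and the toy scoring function below are invented for this sketch, not taken from the paper), such a search could be structured as:

    # Hypothetical grid-search skeleton; the solver call is a toy stand-in.
    import itertools

    etas = [0.01, 0.05, 0.1, 0.5]    # candidate step sizes (illustrative values)
    taus = [0.001, 0.01, 0.1, 1.0]   # candidate regularization weights (illustrative values)

    def run_solver(eta, tau):
        # Stand-in for running a regularized EFG solver and returning its final
        # duality gap; replaced here by a toy score so the loop executes.
        return (eta - 0.1) ** 2 + (tau - 0.01) ** 2

    best = min(
        (run_solver(eta, tau), eta, tau)
        for eta, tau in itertools.product(etas, taus)
    )
    print(f"best score {best[0]:.4g} at eta={best[1]}, tau={best[2]}")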