Efficient Φ-Regret Minimization in Extensive-Form Games via Online Mirror Descent

Authors: Yu Bai, Chi Jin, Song Mei, Ziang Song, Tiancheng Yu

NeurIPS 2022

Reproducibility assessment (variable: result — LLM response):
Research Type: Theoretical — We prove that, in these settings, the Φ-Hedge algorithms are equivalent to standard Online Mirror Descent (OMD) algorithms for EFGs with suitable dilated regularizers, and run in polynomial time. This new connection further allows us to design and analyze a new class of OMD algorithms based on modifying its log-partition function. In particular, we design an improved algorithm with balancing techniques that achieves a sharp Õ(√(XAT)) EFCE-regret under bandit feedback in an EFG with X information sets, A actions, and T episodes. To the best of our knowledge, this is the first such rate, and it matches the information-theoretic lower bound.
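For intuition, the generic Hedge (exponential-weights) update that the paper's Φ-Hedge algorithms instantiate over sets of strategy modifications can be sketched as follows. This is a minimal illustration of the standard update only, not the paper's EFG-specific algorithm; the learning rate and the toy loss vector are illustrative assumptions.

```python
import numpy as np

def hedge_step(weights, losses, eta):
    """One exponential-weights (Hedge) update: scale each action's weight
    by exp(-eta * loss), then renormalize back to a distribution."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

# Toy usage: 3 actions, uniform start, one round of (assumed) losses.
p = np.ones(3) / 3
p = hedge_step(p, np.array([1.0, 0.0, 0.5]), eta=0.5)
```

After the update, probability mass shifts toward the low-loss action (index 1) while the result remains a valid distribution.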
Researcher Affiliation: Collaboration — Yu Bai (Salesforce Research, yu.bai@salesforce.com); Chi Jin (Princeton University, chij@princeton.edu); Song Mei (UC Berkeley, songmei@berkeley.edu); Ziang Song (Stanford University, ziangs@stanford.edu); Tiancheng Yu (MIT, yutc@mit.edu)
Pseudocode: Yes — Algorithm 1 (Φ-Hedge); Algorithm 2 (EFCE-OMD, FTRL form; equivalent OMD form in Algorithm 4); Algorithm 3 (Balanced EFCE-OMD, FTRL form; equivalent OMD form in Algorithm 5)
Open Source Code: No — The paper does not provide concrete access to source code (a repository link, an explicit code-release statement, or code in the supplementary materials) for the methodology it describes.
Open Datasets: No — The paper focuses on theoretical algorithm design and analysis and does not involve empirical evaluation on datasets; accordingly, no publicly available datasets or access information are provided.
Dataset Splits: No — The paper is theoretical and does not conduct experiments involving training, validation, or test splits.
Hardware Specification: No — The paper is purely theoretical and does not describe any experimental setup or hardware used to run experiments.
Software Dependencies: No — The paper does not conduct experiments, so no software dependencies with specific version numbers are listed.
Experiment Setup: No — The paper is theoretical and includes no details about an experimental setup, hyperparameters, or system-level training settings.