Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games

Authors: Yu Bai, Chi Jin, Huan Wang, Caiming Xiong

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium, in the bandit feedback setting where we only observe noisy samples of the reward.
Researcher Affiliation | Collaboration | Yu Bai (Salesforce Research, yu.bai@salesforce.com); Chi Jin (Princeton University, chij@princeton.edu); Huan Wang (Salesforce Research, huan.wang@salesforce.com); Caiming Xiong (Salesforce Research, cxiong@salesforce.com)
Pseudocode | Yes | Algorithm 1, "Learning Stackelberg in bandit games" (a hedged sketch of this setting appears after the table).
Open Source Code | No | The paper makes no statement and provides no link regarding open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and focuses on sample complexity; it neither uses nor refers to publicly available datasets.
Dataset Splits | No | The paper is theoretical and describes no train/validation/test dataset splits.
Hardware Specification | No | The paper reports no hardware used to run experiments.
Software Dependencies | No | The paper lists no software dependencies or version numbers.
Experiment Setup | No | The paper is theoretical and details no concrete experimental setup, such as hyperparameters or training configurations.
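
The Pseudocode row above is the paper's only algorithmic artifact, so a concrete illustration may help. Below is a minimal Python sketch of a plug-in learner for a bandit Stackelberg matrix game: sample every action pair, form empirical means with Hoeffding-style confidence widths, and let the leader guard against every follower response that is near-optimal within those widths. This is not the authors' Algorithm 1; the function and parameter names (`learn_stackelberg`, `sample`, `n_per_cell`) are hypothetical, and uniform per-cell sampling is an assumed simplification.

```python
import numpy as np

def learn_stackelberg(sample, A, B, n_per_cell, delta=0.05):
    """Hypothetical plug-in learner for a bandit Stackelberg matrix game.

    sample(a, b) returns one noisy (leader_reward, follower_reward) pair;
    both rewards are assumed to lie in [0, 1].
    """
    mu_l = np.zeros((A, B))  # empirical means of leader rewards
    mu_f = np.zeros((A, B))  # empirical means of follower rewards
    for a in range(A):
        for b in range(B):
            draws = [sample(a, b) for _ in range(n_per_cell)]
            mu_l[a, b] = np.mean([r_l for r_l, _ in draws])
            mu_f[a, b] = np.mean([r_f for _, r_f in draws])

    # Hoeffding-style width so that, with probability 1 - delta, every
    # empirical mean is within eps of its true value (union bound over
    # the 2*A*B estimated means).
    eps = np.sqrt(np.log(2 * A * B / delta) / (2 * n_per_cell))

    # For each leader action, collect the follower responses that could
    # still be best responses given the confidence width, and score the
    # leader action by its worst payoff over that set (pessimism).
    values = np.empty(A)
    for a in range(A):
        near_best = np.flatnonzero(mu_f[a] >= mu_f[a].max() - 2 * eps)
        values[a] = mu_l[a, near_best].min()

    a_star = int(values.argmax())
    return a_star, values[a_star]

# Usage on a small synthetic game with Bernoulli rewards.
rng = np.random.default_rng(0)
R_l = np.array([[0.9, 0.1, 0.5], [0.4, 0.8, 0.2]])  # leader reward means
R_f = np.array([[0.2, 0.7, 0.6], [0.5, 0.3, 0.9]])  # follower reward means
noisy = lambda a, b: (rng.binomial(1, R_l[a, b]), rng.binomial(1, R_f[a, b]))
a_star, value = learn_stackelberg(noisy, A=2, B=3, n_per_cell=2000)
```

Taking the minimum over the follower's near-best-response set is the conservative choice; it reflects the paper's point that under bandit feedback the follower's exact best response cannot be identified, so the leader's value is only learnable relative to the set of responses the samples cannot distinguish.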