Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games
Authors: Yu Bai, Chi Jin, Huan Wang, Caiming Xiong
NeurIPS 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium, in the bandit feedback setting where we only observe noisy samples of the reward. |
| Researcher Affiliation | Collaboration | Yu Bai (Salesforce Research); Chi Jin (Princeton University); Huan Wang (Salesforce Research); Caiming Xiong (Salesforce Research) |
| Pseudocode | Yes | Algorithm 1 Learning Stackelberg in bandit games |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and focuses on sample complexity. It does not use or refer to specific publicly available datasets for training empirical models. |
| Dataset Splits | No | The paper is theoretical and does not describe experimental validation dataset splits. |
| Hardware Specification | No | The paper does not provide specific hardware details used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not detail concrete experimental setup parameters like hyperparameters or training configurations for empirical runs. |
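The setting the report classifies above, learning a Stackelberg equilibrium from noisy bandit feedback, can be illustrated with a minimal sketch. This is not the paper's Algorithm 1: it is a naive uniform-sampling baseline on a hypothetical 3x3 general-sum matrix game (the payoff matrices and sample count below are invented for illustration). The leader commits to a pure action, the follower best-responds, and the learner only sees Bernoulli reward samples for each action pair.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical general-sum game: u_leader[a, b] and u_follower[a, b] are the
# expected rewards when the leader plays a and the follower plays b.
u_leader = np.array([[0.9, 0.1, 0.3],
                     [0.4, 0.8, 0.2],
                     [0.5, 0.6, 0.7]])
u_follower = np.array([[0.2, 0.7, 0.5],
                       [0.9, 0.3, 0.4],
                       [0.1, 0.6, 0.8]])

def stackelberg(ul, uf):
    """Stackelberg equilibrium with pure leader commitment:
    the follower best-responds to each leader action, and the leader
    picks the action whose induced best response maximizes her reward."""
    br = uf.argmax(axis=1)                         # follower's best response per leader action
    leader_vals = ul[np.arange(ul.shape[0]), br]   # leader's value under each commitment
    a = int(leader_vals.argmax())
    return a, int(br[a]), float(leader_vals[a])

def learn_from_bandit_samples(ul, uf, n_samples=2000):
    """Naive sketch of the bandit-feedback setting: draw n_samples Bernoulli
    rewards per action pair, then solve the empirical game. (The paper's
    algorithm is more sample-efficient; this is only a uniform baseline.)"""
    A, B = ul.shape
    ul_hat = np.empty((A, B))
    uf_hat = np.empty((A, B))
    for a in range(A):
        for b in range(B):
            ul_hat[a, b] = rng.binomial(n_samples, ul[a, b]) / n_samples
            uf_hat[a, b] = rng.binomial(n_samples, uf[a, b]) / n_samples
    return stackelberg(ul_hat, uf_hat)

print(stackelberg(u_leader, u_follower))          # equilibrium of the true game
print(learn_from_bandit_samples(u_leader, u_follower))  # equilibrium of the empirical game
```

With enough samples per pair, the empirical game's equilibrium matches the true one; the paper's contribution is characterizing how few samples suffice and how noise in the follower's best response affects the leader's achievable value.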