Best of Both Worlds Policy Optimization
Authors: Christoph Dann, Chen-Yu Wei, Julian Zimmert
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Recent works have built theoretical foundation for them by proving √T regret bounds even when the losses are adversarial. Such bounds are tight in the worst case but often overly pessimistic. In this work, we show that in tabular Markov decision processes (MDPs), by properly designing the regularizer, the exploration bonus and the learning rates, one can achieve a more favorable polylog(T) regret when the losses are stochastic, without sacrificing the worst-case guarantee in the adversarial regime. To our knowledge, this is also the first time a gap-dependent polylog(T) regret bound is shown for policy optimization. Specifically, we achieve this by leveraging a Tsallis entropy or a Shannon entropy regularizer in the policy update. Then we show that under known transitions, we can further obtain a first-order regret bound in the adversarial regime by leveraging the log barrier regularizer. (A schematic of this family of regularized policy updates appears after the table.) |
| Researcher Affiliation | Collaboration | ¹Google Research, ²MIT Institute for Data, Systems, and Society. Correspondence to: Chen-Yu Wei <chenyuw@mit.edu>. |
| Pseudocode | Yes | Algorithm 1: Policy Optimization (an illustrative code sketch of one such update step follows the table) |
| Open Source Code | No | The paper does not provide any access to source code for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not involve experiments on datasets, so no information regarding publicly available datasets or access is provided. |
| Dataset Splits | No | The paper is theoretical and does not involve experiments, so no specific dataset split information for training, validation, or testing is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe computational experiments, therefore no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and does not describe an implementation, so no specific software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs rather than experimental implementation. It discusses tuning learning rates and designing regularizers as part of the theoretical framework, but reports no concrete experimental setup details such as hyperparameter values for an actual run. |
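
The abstract quoted above attributes the best-of-both-worlds guarantees to the choice of regularizer, exploration bonus, and learning rates in the policy update. The display below is only a schematic of that family of regularized updates, not the paper's exact algorithm; the symbols Q̂ₜ (loss estimate), Bₜ (exploration bonus), ηₜ (learning rate), and ψ (regularizer) are naming choices made here for illustration.

```latex
% Schematic per-state policy update (illustrative; see Algorithm 1 in the paper
% for the exact estimates, bonuses, and learning-rate schedules):
\[
  \pi_{t+1}(\cdot \mid s)
  \;=\;
  \operatorname*{argmin}_{p \,\in\, \Delta(\mathcal{A})}
  \;\eta_t \,\big\langle p,\; \widehat{Q}_t(s,\cdot) - B_t(s,\cdot) \big\rangle
  \;+\; \psi(p)
\]
% with \psi one of the regularizers named in the abstract, e.g.
% the 1/2-Tsallis entropy  \psi(p) = -2 \sum_a \sqrt{p(a)},
% the Shannon entropy      \psi(p) = \sum_a p(a) \log p(a),
% or the log barrier       \psi(p) = -\sum_a \log p(a).
```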
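Since the paper releases no code, the following Python sketch shows how one step of such an update could be computed for the 1/2-Tsallis regularizer, where the argmin has the closed form π(a) ∝ (η·L(a) + λ)⁻² for a scalar normalizer λ, found here by bisection. The function name, the bisection scheme, and the usage example are assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np

def tsallis_ftrl_update(cum_loss: np.ndarray, eta: float, n_iter: int = 60) -> np.ndarray:
    """One FTRL step with the 1/2-Tsallis entropy regularizer (illustrative sketch).

    Solves  argmin_{p in simplex}  eta * <p, L> - 2 * sum_a sqrt(p_a),
    whose solution is p(a) = (eta * L(a) + lam)^(-2) for the unique
    normalizer lam that makes the probabilities sum to one.
    """
    shifted = eta * (cum_loss - cum_loss.min())      # shift so the smallest loss is 0
    lo, hi = 1e-12, float(np.sqrt(len(cum_loss)))    # bracket: sum > 1 at lo, <= 1 at hi
    for _ in range(n_iter):                          # bisection on the normalizer lam
        lam = 0.5 * (lo + hi)
        if np.sum((shifted + lam) ** -2.0) > 1.0:
            lo = lam
        else:
            hi = lam
    probs = (shifted + lam) ** -2.0
    return probs / probs.sum()                       # guard against residual bisection error

# Hypothetical usage: cumulative (bonus-adjusted) loss estimates for 3 actions.
pi = tsallis_ftrl_update(np.array([0.0, 1.0, 3.0]), eta=0.1)
print(pi)  # places higher probability on lower-loss actions
```

As the abstract notes, the favorable polylog(T) stochastic-regime bounds also depend on the design of the exploration bonus and the learning-rate schedule, neither of which this per-state sketch reproduces.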