Principled Penalty-based Methods for Bilevel Reinforcement Learning and RLHF
Authors: Han Shen, Zhuoran Yang, Tianyi Chen
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our algorithms via simulations in the Stackelberg game and RLHF. In this section, we test the empirical performance of PBRL. |
| Researcher Affiliation | Academia | 1) Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, United States; 2) Department of Statistics and Data Science, Yale University, United States. |
| Pseudocode | Yes | Algorithm 1 PBRL: Penalty-based Bilevel RL Algorithm |
| Open Source Code | No | The paper does not explicitly state that the code for the methodology is open-sourced, nor does it provide a link to a code repository. |
| Open Datasets | No | We conduct our experiments in the Arcade Learning Environment (ALE) (Bellemare et al., 2013) through Open AI gym. Here the transition distribution and rewards are randomly generated. The paper describes using publicly available environments but does not provide concrete access to datasets generated or used within those environments, nor does it state that a specific dataset used is publicly available with access details. |
| Dataset Splits | No | The paper describes data collection processes and dynamic buffer usage in RLHF experiments ('collect 576 pairs', 'keep last collected 3000 pairs') but does not specify fixed train/validation/test dataset splits. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run its experiments. |
| Software Dependencies | No | The paper mentions using 'Open AI gym' for environments and 'A2C' as the policy gradient estimator, but it does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | For policy learning, we have actor-critic learning rate 0.0003, entropy coefficient 0.01, actor-critic batch size 16, initial upper-level loss coefficient 0.001 which decays every 3000 actor-critic gradient steps; for reward learning, we set reward predictor learning rate 0.0003, reward predictor batch size 64, and the reward predictor is trained for one epoch every 500 actor-critic gradient steps. For Beamrider, we change actor-critic learning rate to 7 × 10^-5. |
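
The hyperparameters quoted in the Experiment Setup row can be restated as a small config sketch. This is only an illustrative summary of the values reported above; the dictionary keys, structure, and the per-environment override name are assumptions, since the authors' code is not released.

```python
# Illustrative restatement of the PBRL experiment setup reported in the paper.
# Key names are assumptions; only the numeric values come from the reported setup.

pbrl_config = {
    "policy_learning": {
        "actor_critic_lr": 3e-4,                 # actor-critic learning rate
        "entropy_coef": 0.01,                    # entropy coefficient
        "actor_critic_batch_size": 16,
        "upper_level_loss_coef_init": 0.001,     # initial upper-level loss coefficient
        "upper_level_coef_decay_interval": 3000, # decays every 3000 actor-critic gradient steps
    },
    "reward_learning": {
        "reward_predictor_lr": 3e-4,
        "reward_predictor_batch_size": 64,
        "reward_train_interval": 500,            # train for one epoch every 500 actor-critic steps
        "reward_train_epochs": 1,
    },
}

# Per-environment override reported for Beamrider (environment key is hypothetical).
per_env_overrides = {
    "BeamRider": {"policy_learning": {"actor_critic_lr": 7e-5}},
}
```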