Scaling Laws for Reward Model Overoptimization
Authors: Leo Gao, John Schulman, Jacob Hilton
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this work, we use a synthetic setup in which a fixed gold-standard reward model plays the role of humans, providing labels used to train a proxy reward model. We study how the gold reward model score changes as we optimize against the proxy reward model using either reinforcement learning or best-of-n sampling. We find that this relationship follows a different functional form depending on the method of optimization, and that in both cases its coefficients scale smoothly with the number of reward model parameters. We also study the effect on this relationship of the size of the reward model dataset, the number of reward model and policy parameters, and the coefficient of the KL penalty added to the reward in the reinforcement learning setup. (The functional forms and an illustrative best-of-n sketch appear after this table.) |
| Researcher Affiliation | Industry | Leo Gao 1 John Schulman 1 Jacob Hilton 1 1OpenAI. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | For all experiments, we use pretrained GPT-3 series language models as the initial checkpoint (Brown et al., 2020). All initial policies are trained with supervised fine-tuning (SFT) on human-generated InstructGPT demonstrations (Ouyang et al., 2022) for 2 epochs. The 6B reward model from Ouyang et al. (2022) is used as the gold RM. |
| Dataset Splits | Yes | We generate 100,000 synthetic comparisons and reserve 10% of these as a held out test set for computing the validation loss of RMs. [...] using a validation set of soft labels. We hypothesized that two RMs of equal validation loss would achieve the same robustness against optimization, regardless of the combination of RM size and RM data size. |
| Hardware Specification | No | The paper mentions using GPT-3 series language models but does not specify the hardware (e.g., GPU models, CPU types, or specific cloud instances) used for running its experiments. |
| Software Dependencies | No | The paper mentions using Proximal Policy Optimization (PPO) but does not provide specific version numbers for PPO or any other software dependencies. |
| Experiment Setup | Yes | The KL penalty for all RL experiments is set to 0 except in Section 3.6. See Appendix C for all other hyperparameters. Table 1 (hyperparameters used throughout the experiments): RM Adam learning rate multiplier: 1.67e-2; RM batch size: 64; RL Adam learning rate multiplier: 4e-3; RL batch size: 256; RL PPO clipping parameter: 0.2; RL timesteps per rollout: 256; RL minibatches per epoch: 128; RL GAE bootstrapping parameter: 0.95. (A sketch of the KL-penalized RL reward appears below.) |
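
For context on the Research Type row: the "different functional form depending on the method of optimization" refers to the paper's overoptimization scaling laws, written in terms of the distance d, the square root of the KL divergence between the optimized policy and the initial policy. A hedged restatement of those forms (the α and β coefficients are fitted per reward-model size):

```latex
d = \sqrt{D_{\mathrm{KL}}\!\left(\pi \,\|\, \pi_{\mathrm{init}}\right)}, \qquad
R_{\mathrm{bon}}(d) = d\left(\alpha_{\mathrm{bon}} - \beta_{\mathrm{bon}}\, d\right), \qquad
R_{\mathrm{RL}}(d) = d\left(\alpha_{\mathrm{RL}} - \beta_{\mathrm{RL}}\log d\right)
```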
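
The synthetic setup quoted above (select completions with a proxy RM, score the winner with a fixed gold RM) can be mocked up in a few lines. The sketch below is a hypothetical illustration rather than the authors' code; `policy_sample`, `proxy_rm`, and `gold_rm` are assumed stand-ins for a policy sampler and the two reward models.

```python
import math
from typing import Callable, List


def best_of_n(
    prompt: str,
    n: int,
    policy_sample: Callable[[str], str],    # draws one completion from the (fixed) policy
    proxy_rm: Callable[[str, str], float],  # proxy reward model used for selection
    gold_rm: Callable[[str, str], float],   # gold reward model used only for evaluation
) -> dict:
    """Best-of-n sampling: select by proxy reward, report the gold reward of the winner."""
    completions: List[str] = [policy_sample(prompt) for _ in range(n)]
    best = max(completions, key=lambda c: proxy_rm(prompt, c))
    # Analytic KL between the best-of-n distribution and the policy: log n - (n - 1) / n,
    # the quantity the BoN curves are plotted against (via d = sqrt(KL)).
    kl_bon = math.log(n) - (n - 1) / n
    return {
        "gold_reward": gold_rm(prompt, best),
        "proxy_reward": proxy_rm(prompt, best),
        "kl": kl_bon,
        "d": math.sqrt(kl_bon),
    }
```

Sweeping n and plotting gold reward against d traces the best-of-n overoptimization curve; the closed-form KL above is the standard expression for best-of-n selection.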
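
The KL penalty referenced in the Experiment Setup row (0 in most runs, nonzero only in Section 3.6) is, in standard RLHF practice, a penalty subtracted from the proxy reward. A minimal sketch under that assumption; the paper's exact accounting (per-token vs. per-sequence) is not restated here.

```python
def penalized_reward(
    proxy_reward: float,
    logprob_policy: float,  # log pi(y|x) under the current policy
    logprob_init: float,    # log pi_init(y|x) under the initial (SFT) policy
    kl_coef: float = 0.0,   # 0 for most experiments; nonzero only in Section 3.6
) -> float:
    """Proxy reward minus a KL penalty toward the initial policy (standard RLHF shaping)."""
    return proxy_reward - kl_coef * (logprob_policy - logprob_init)
```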