Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning

Authors: Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, Yu Bai

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper initiates the theoretical study of policy finetuning... We study the policy finetuning problem theoretically in finite-horizon Markov Decision Processes (MDPs) with H time steps, S states, and A actions.
Researcher Affiliation | Collaboration | Tengyang Xie (UIUC, tx10@illinois.edu); Nan Jiang (UIUC, nanjiang@illinois.edu); Huan Wang (Salesforce Research, huan.wang@salesforce.com); Caiming Xiong (Salesforce Research, cxiong@salesforce.com); Yu Bai (Salesforce Research, yu.bai@salesforce.com)
Pseudocode | Yes | Algorithm 1: Pessimistic Value Iteration with Reference-Advantage Decomposition (PEVI-ADV); a minimal illustrative sketch of pessimistic value iteration appears after this table.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | No | The paper studies theoretical aspects of reinforcement learning in episodic Markov Decision Processes (MDPs) and does not use, or provide access information for, any publicly available or open dataset.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments involving dataset splits (e.g., training, validation, or test splits).
Hardware Specification | No | The paper is theoretical and does not report empirical experiments that would require or specify hardware details.
Software Dependencies | No | The paper is theoretical, focusing on algorithms and their sample complexity, and does not list software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical, focusing on algorithm design and analysis, and does not provide experimental setup details such as hyperparameters or training configurations.
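
To give the pseudocode entry above some context, here is a minimal sketch of plain pessimistic value iteration on a tabular finite-horizon MDP with H steps, S states, and A actions. It is an assumption-laden illustration, not the paper's Algorithm 1: the function name `pessimistic_value_iteration`, the dataset format, and the Hoeffding-style bonus are placeholders, and it omits the reference-advantage decomposition that distinguishes PEVI-ADV.

```python
import numpy as np

def pessimistic_value_iteration(dataset, S, A, H, delta=0.01):
    """Illustrative pessimistic value iteration on a tabular finite-horizon MDP.

    `dataset` is a list of trajectories, each a list of (h, s, a, r, s_next)
    tuples with states in [0, S), actions in [0, A), and steps in [0, H).
    This uses a generic Hoeffding-style bonus, not the paper's PEVI-ADV bonuses.
    """
    # Empirical counts, reward sums, and transition counts per (h, s, a).
    count = np.zeros((H, S, A))
    reward_sum = np.zeros((H, S, A))
    next_count = np.zeros((H, S, A, S))
    for traj in dataset:
        for (h, s, a, r, s_next) in traj:
            count[h, s, a] += 1
            reward_sum[h, s, a] += r
            next_count[h, s, a, s_next] += 1

    V = np.zeros((H + 1, S))            # V[H] = 0 at the terminal step
    policy = np.zeros((H, S), dtype=int)
    log_term = np.log(2 * S * A * H / delta)

    for h in range(H - 1, -1, -1):      # backward induction over time steps
        n = np.maximum(count[h], 1)
        r_hat = reward_sum[h] / n
        p_hat = next_count[h] / n[..., None]
        # Pessimism: subtract an uncertainty bonus from the empirical Q estimate.
        bonus = H * np.sqrt(log_term / n)
        Q = r_hat + p_hat @ V[h + 1] - bonus
        Q = np.clip(Q, 0.0, H)          # value estimates lie in [0, H]
        policy[h] = Q.argmax(axis=1)
        V[h] = Q.max(axis=1)

    return policy, V
```

Under this kind of pessimism, poorly covered state-action pairs receive a large penalty, so the greedy policy is steered toward actions the offline data actually supports; the paper's reference-advantage decomposition refines these bonuses with a reference value function to obtain sharper guarantees.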