Sample-Optimal Parametric Q-Learning Using Linearly Additive Features

Authors: Lin Yang, Mengdi Wang

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We propose a parametric Q-learning algorithm that finds an approximately optimal policy using a sample size proportional to the feature dimension K and invariant with respect to the size of the state space. To further improve its sample efficiency, we exploit the monotonicity property and intrinsic noise structure of the Bellman operator, provided the existence of anchor state-actions that imply implicit non-negativity in the feature space. We augment the algorithm using techniques of variance reduction, monotonicity preservation, and confidence bounds. It is proved to find a policy which is ϵ-optimal from any initial state with high probability using Õ(K/(ϵ²(1−γ)³)) sample transitions for arbitrarily large-scale MDPs with a discount factor γ ∈ (0, 1). A matching information-theoretic lower bound is proved, confirming the sample optimality of the proposed method with respect to all parameters (up to polylog factors). All technical proofs are given in the appendix. |
| Researcher Affiliation | Academia | Lin F. Yang¹, Mengdi Wang¹. ¹Department of Operations Research and Financial Engineering, Princeton University. Correspondence to: Lin Yang <lin.yang@princeton.edu>, Mengdi Wang <mengdiw@princeton.edu>. |
| Pseudocode | Yes | Algorithm 1, Phased Parametric Q-Learning (PPQ-Learning), and Algorithm 2, Optimal Phased Parametric Q-Learning (OPPQ-Learning); see the hedged sketch after this table. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper is theoretical and focuses on sample complexity in a generative model setting, not on experiments with a specific public dataset. Therefore, it does not provide concrete access information for a publicly available or open dataset for training. |
| Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments, thus it does not provide the dataset split information (training, validation, test) needed for reproduction. |
| Hardware Specification | No | The paper is theoretical and does not conduct empirical experiments, therefore it does not provide specific hardware details (such as GPU/CPU models or memory) used for running experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate experiments. |
| Experiment Setup | No | The paper is theoretical and does not describe empirical experiments. Therefore, it does not provide specific experimental setup details such as hyperparameter values or training configurations. |
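
The Pseudocode row points to Algorithm 1 (PPQ-Learning) and Algorithm 2 (OPPQ-Learning) in the paper. The sketch below is only a rough illustration of the phased, feature-based update the abstract describes, not the authors' implementation: the function names, arguments, and default values are hypothetical, the anchor condition φ(s_k, a_k) = e_k is assumed for simplicity, and the variance-reduction, monotonicity-preservation, and confidence-bound refinements of Algorithm 2 are omitted.

```python
import numpy as np

def phased_parametric_q_learning(
    phi,                      # feature map: phi(s, a) -> np.ndarray of shape (K,)
    reward,                   # known reward function: reward(s, a) -> float
    sample_next,              # generative model: sample_next(s, a) -> next state s'
    anchors,                  # K anchor state-action pairs [(s_1, a_1), ..., (s_K, a_K)]
    actions,                  # finite action set
    gamma=0.99,               # discount factor (illustrative default)
    num_phases=50,            # number of outer phases (illustrative default)
    samples_per_anchor=100,   # transitions drawn per anchor per phase (illustrative default)
):
    """Rough sketch of a phased parametric Q-learning loop with linearly
    additive features. For illustration only; it assumes the anchor
    features form the canonical basis, i.e. phi(s_k, a_k) = e_k."""
    K = len(anchors)
    w = np.zeros(K)  # parametric representation of the Q-function

    def q_value(s, a):
        # Q_w(s, a) = r(s, a) + gamma * phi(s, a)^T w
        return reward(s, a) + gamma * phi(s, a) @ w

    def v_value(s):
        # greedy value induced by the current parameters: max_a Q_w(s, a)
        return max(q_value(s, a) for a in actions)

    for _ in range(num_phases):
        w_new = np.zeros(K)
        for k, (s_k, a_k) in enumerate(anchors):
            # Monte Carlo estimate of E[V_w(s') | s_k, a_k] from the generative
            # model; under the assumed anchor condition this average serves as
            # the k-th parameter coordinate for the next phase.
            w_new[k] = np.mean(
                [v_value(sample_next(s_k, a_k)) for _ in range(samples_per_anchor)]
            )
        w = w_new

    def greedy_policy(s):
        # policy that acts greedily with respect to the learned Q-function
        return max(actions, key=lambda a: q_value(s, a))

    return w, greedy_policy
```

In this simplified form, each phase applies one approximate Bellman backup in the K-dimensional parameter space, which reflects why the abstract's sample requirement scales with the feature dimension K rather than with the number of states.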