Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Faster Non-asymptotic Convergence for Double Q-learning

Authors: Lin Zhao, Huaqing Xiong, Yingbin Liang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To illustrate our theoretical results, we apply the synchronous double Q-learning to an MDP adapted from (Wainwright, 2019b)... The plotted curves are averaged over 1000 independent runs."
Researcher Affiliation | Academia | Lin Zhao, National University of Singapore (EMAIL); Huaqing Xiong, The Ohio State University (EMAIL); Yingbin Liang, The Ohio State University (EMAIL)
Pseudocode | No | The paper describes the update rules mathematically (e.g., equation 2) but does not present them as structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available.
Open Datasets | No | The paper refers to "an MDP adapted from (Wainwright, 2019b)" and describes modifications to the reward function, but it does not provide concrete access information (link, DOI, repository, or specific citation for a public dataset) for this MDP or any other dataset.
Dataset Splits | No | The paper describes averaging results over independent runs in numerical experiments but does not specify training, validation, and test dataset splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the numerical experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers used for the experiments.
Experiment Setup | Yes | "We set the initial conditions as QA = QB = 1.0 with appropriate dimensions... Then from t = 10^3, we switch to a constant stepsize of α = 0.001. The plotted curves are averaged over 1000 independent runs."
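The setup excerpted above (synchronous tabular double Q-learning, Q^A = Q^B = 1.0 initialization, a stepsize that switches to a constant α = 0.001 from t = 10^3) can be sketched as follows. This is a minimal illustration, not the paper's code: the MDP from (Wainwright, 2019b) and the pre-switch stepsize schedule are not given in the excerpt, so the toy random MDP in the usage example and the assumed 1/t schedule before the switch are placeholders.

```python
import numpy as np

def synchronous_double_q(P, R, gamma=0.95, T=5000, switch_t=1000,
                         const_alpha=1e-3, seed=0):
    """Sketch of synchronous tabular double Q-learning.

    P: transition tensor of shape (S, A, S); R: reward table of shape (S, A).
    In the synchronous setting, every (state, action) pair is updated at
    each iteration using an independently sampled next state.
    """
    rng = np.random.default_rng(seed)
    S, A = R.shape
    # Initial condition Q^A = Q^B = 1.0, as stated in the excerpt.
    QA = np.ones((S, A))
    QB = np.ones((S, A))
    for t in range(1, T + 1):
        # Stepsize: assumed 1/t schedule early on (not specified in the
        # excerpt), switching to the constant 0.001 from t = switch_t.
        alpha = const_alpha if t >= switch_t else 1.0 / t
        # Sample one next state for every (s, a) pair.
        s_next = np.array([[rng.choice(S, p=P[s, a]) for a in range(A)]
                           for s in range(S)])
        if rng.random() < 0.5:
            # Update Q^A: evaluate Q^A's greedy next action with Q^B.
            a_star = QA[s_next].argmax(axis=-1)
            QA += alpha * (R + gamma * QB[s_next, a_star] - QA)
        else:
            # Symmetric update for Q^B, evaluated with Q^A.
            a_star = QB[s_next].argmax(axis=-1)
            QB += alpha * (R + gamma * QA[s_next, a_star] - QB)
    return QA, QB
```

A usage example on a hypothetical random MDP (standing in for the unspecified one): build a row-normalized transition tensor `P` and reward table `R`, call `synchronous_double_q(P, R)`, and average the resulting value estimates over independent seeded runs, as the paper's curves are averaged over 1000 runs.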