Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

PAC Optimal MDP Planning with Application to Invasive Species Management

Authors: Majid Alkaee Taleghan, Thomas G. Dietterich, Mark Crowley, Kim Hall, H. Jo Albers

JMLR 2015 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type: Experimental — "Experiments on two benchmark problems and two instances of an invasive species management problem show that the improved confidence intervals and the new search heuristics yield reductions of between 8% and 47% in the number of simulator calls required to reach near-optimal policies."
Researcher Affiliation: Academia — Majid Alkaee Taleghan (EMAIL), Thomas G. Dietterich (EMAIL), Mark Crowley (EMAIL), School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR 97331; Kim Hall (EMAIL), Department of Forest Ecosystems and Society, Oregon State University, Corvallis, OR 97331; H. Jo Albers (EMAIL), Haub School of Environment and Natural Resources and Department of Economics and Finance, University of Wyoming, Laramie, WY 82072
Pseudocode: Yes — Algorithm 1: Fiechter(s0, γ, F, ε, δ); Algorithm 2: UPPERP(s, a, δ, M0); Algorithm 3: MBIE-reset(s0, γ, F, H, ε, δ); Algorithm 4: DDV(s0, γ, F, ε, δ)
Open Source Code: No — The paper provides a link for the invasive species simulator: "Code for the simulator can be obtained from http://2013.rl-competition.org/domains/invasive-species." However, this link covers the *simulator environment* used in the experiments, not the *methodology* (the DDV algorithms) described in the paper.
Open Datasets: No — The paper describes experiments on simulator-based benchmark MDPs and invasive species management problems rather than on publicly available static datasets.
Dataset Splits: No — The experiments are conducted on MDPs (Markov Decision Processes), which are simulator-based environments rather than static datasets, so explicit training/validation/test splits are neither applicable nor mentioned.
Hardware Specification: No — The paper mentions "CPU time consumed per simulator call" but does not specify the CPU models, GPUs, or other hardware used to run the experiments.
Software Dependencies: No — The paper does not provide version numbers for any software components, libraries, or programming languages used to implement the described methods.
Experiment Setup: Yes — "The discount factor was set to 0.9 in all four MDPs. Each algorithm was executed for one million simulator calls. Instead of performing dynamic programming updates (for extended value iteration and occupancy measure computation) after every simulator call, we computed them on the following schedule. For MBIE-reset, we performed dynamic programming after each complete trajectory. For DDV-OUU and DDV-UPPER, we performed dynamic programming after every 10 simulator calls. ... For problems where the value V(s0) of the optimal policy is known, we define ε = αV(s0) and plot the required sample size as a function of α. For the tamarisk problems, where V(s0) is not known, we define ε = αRmax and again plot the required sample size as a function of α."
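The amortized update schedule quoted above (one dynamic-programming pass per batch of simulator calls, rather than after every call) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the simulator call, the DP update, and the call budget are hypothetical stand-ins.

```python
def run_with_schedule(simulator_call, dp_update, budget=1_000_000, dp_every=10):
    """Interleave simulator calls with periodic dynamic-programming updates.

    With dp_every=10 this mirrors the DDV schedule described in the paper
    (one DP pass per 10 simulator calls); a trajectory-based trigger would
    mirror the MBIE-reset schedule instead. Returns the number of DP passes.
    """
    dp_passes = 0
    for t in range(1, budget + 1):
        simulator_call()           # draw one sample transition from the MDP simulator
        if t % dp_every == 0:      # amortize the expensive DP computation
            dp_update()
            dp_passes += 1
    return dp_passes

# Toy usage with no-op stand-ins: 100 simulator calls trigger 10 DP passes.
n = run_with_schedule(lambda: None, lambda: None, budget=100, dp_every=10)
```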