Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Probabilistic Planning with Reduced Models

Authors: Luis Pineda, Shlomo Zilberstein

JAIR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In the experimental results section, we show that each of the two model reduction dimensions contributes significantly to outperforming existing approaches. In this section we present experiments for evaluating the performance of M_k^l-REPLAN and M_k^l-ANYTIME (Section 7.1), and FF-LAO*-REPLAN (Section 7.2).
Researcher Affiliation | Collaboration | Luis Pineda (EMAIL), Facebook AI Research, Montreal, QC, Canada; Shlomo Zilberstein (EMAIL), University of Massachusetts, Amherst, MA, USA
Pseudocode | Yes | Algorithm 1: M_k^l-REPLAN, a continual planning approach for handling more than k exceptions; Algorithm 2: M_k^l-ANYTIME; Algorithm 3: FF-LAO*; Algorithm 4: FF-EXPAND; Algorithm 5: FF-TEST-CONVERGENCE; Algorithm 6: FF-BELLMAN-UPDATE; Algorithm 7: FF-LAO*-REPLAN; Algorithm 8: GREEDY-LEARN, a greedy method for finding good reduced models; Algorithm 9: LEARNING-DET
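The M_k^l reductions that these algorithms operate on keep, for each action, l primary outcomes, while exceptional outcomes are permitted at most k times along a trajectory (tracked by a counter in the state). The following is a minimal illustrative sketch of this idea, not the authors' mdp-lib implementation; all names and the toy outcome distribution are hypothetical:

```python
# Illustrative sketch of an M_k^l reduced transition model. Each action keeps
# only its l most likely ("primary") outcomes; the remaining ("exceptional")
# outcomes may occur at most k times, tracked by a counter carried in the
# augmented state (state, counter). Toy data and names are hypothetical.

def reduce_outcomes(outcomes, l):
    """Keep the l most probable outcomes and renormalize their probabilities."""
    primary = sorted(outcomes, key=lambda o: -o[1])[:l]
    total = sum(p for _, p in primary)
    return [(s, p / total) for s, p in primary]

def reduced_transition(counter, outcomes, l, k):
    """Outcomes of the reduced model over (next_state, counter) pairs.

    While counter < k, all outcomes are kept: primary outcomes leave the
    counter unchanged, exceptional ones increment it. Once counter == k,
    only the l primary outcomes remain (renormalized).
    """
    primary_states = {s for s, _ in reduce_outcomes(outcomes, l)}
    if counter >= k:
        return [((s, counter), p) for s, p in reduce_outcomes(outcomes, l)]
    return [((s, counter if s in primary_states else counter + 1), p)
            for s, p in outcomes]

# Example: an action with three outcomes; l = 1 primary outcome, k = 1 exception.
outs = [("s1", 0.7), ("s2", 0.2), ("s3", 0.1)]
print(reduced_transition(0, outs, l=1, k=1))  # all outcomes, exceptions bump counter
print(reduced_transition(1, outs, l=1, k=1))  # budget spent: only the primary outcome
```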
Open Source Code | Yes | Source code for reproducing these experiments is available at https://github.com/luisenp/mdp-lib.
Open Datasets | Yes | To illustrate this issue we experimented with different determinizations of the TRIANGLE-TIREWORLD domain (Little & Thiébaux, 2007). This problem involves a car traveling between locations on a graph shaped like a triangle (see Figure 4). We evaluated FF-LAO* and LEARNING-DET on a set of problems taken from IPPC 08 (Bryce & Buffet, 2008). Specifically, we used the first 10 problem instances of the following four domains: TRIANGLE-TIREWORLD, BLOCKSWORLD, EX-BLOCKSWORLD, and ZENOTRAVEL.
Dataset Splits | No | The evaluation methodology was similar to the one used in past planning competitions: we give each planner 20 minutes to solve 50 rounds of each problem (i.e., reach a goal state starting from the initial state).
Hardware Specification | Yes | All experiments were conducted on an Intel Core i7-6820HQ machine running at 2.70GHz with a 4GB memory cutoff.
Software Dependencies | No | We used the LRTDP algorithm (Bonet & Geffner, 2003) as the underlying optimal planner, since it has better anytime properties than LAO*. This solver, FF-LAO* (Algorithms 3-6), receives as input an M_k^1-reduction, M′ = ⟨S, A, T′, C, s_0, k, G⟩, i.e., one where ∀a ∈ A, |P_a| = 1; an exception bound, k; and an error tolerance, ϵ. We use M to denote the original MDP from which M′ is derived.
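Planners such as LRTDP and FF-LAO* are built around Bellman backups over the (reduced) model. As a generic point of reference only, and not the paper's FF-BELLMAN-UPDATE, a textbook cost-minimizing backup looks like the following sketch (all names and the toy MDP are hypothetical):

```python
# Generic Bellman backup for a cost-minimizing MDP/SSP: the backed-up value of
# a state is the minimum over actions of immediate cost plus expected
# cost-to-go. Textbook sketch with hypothetical names, not the paper's
# FF-BELLMAN-UPDATE.
def bellman_update(state, actions, transition, cost, values):
    """Return (best_value, best_action) for one backup at `state`.

    transition(state, action) -> list of (next_state, probability)
    cost(state, action)       -> immediate cost
    values                    -> dict of current value estimates
    """
    best_value, best_action = float("inf"), None
    for a in actions:
        q = cost(state, a) + sum(p * values.get(s2, 0.0)
                                 for s2, p in transition(state, a))
        if q < best_value:
            best_value, best_action = q, a
    return best_value, best_action

# Tiny example: two actions from s0; action "b" is cheaper in expectation.
V = {"g": 0.0, "s1": 4.0}
T = {("s0", "a"): [("s1", 1.0)], ("s0", "b"): [("g", 0.9), ("s1", 0.1)]}
C = {("s0", "a"): 1.0, ("s0", "b"): 2.0}
val, act = bellman_update("s0", ["a", "b"],
                          lambda s, a: T[(s, a)], lambda s, a: C[(s, a)], V)
print(val, act)  # best action is "b", with expected cost 2.4
```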
Experiment Setup | Yes | For the racetrack problem, we used p_slip = 0.1 and p_er = 0.05 (see description in Section 4.1). We evaluate M_k^l-REPLAN using two possible sets of primary outcomes, one with l = 1 and one with l = 2, and values of k ∈ {0, 1, 2, 3} for each of them. For RFF we used MLO and the Random Goals variant, in which before every call to FF, a random subset (size 100) of the previously solved states are added as goal states. Additionally, we used a probability threshold ρ = 0.2. For SSiPP we used t = 3 and the h_add heuristic. We used a dead-end cap D = 500 throughout our experiments. We initialized values with the non-admissible FF heuristic (Bonet & Geffner, 2005). The evaluation methodology was similar to the one used in past planning competitions: we give each planner 20 minutes to solve 50 rounds of each problem (i.e., reach a goal state starting from the initial state).
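The competition-style protocol described above (a fixed 20-minute budget to complete 50 rounds of each problem) can be sketched as a simple harness. The `solve_round` method and planner object below are hypothetical stand-ins for illustration, not the authors' evaluation code:

```python
import time

# Sketch of a competition-style evaluation loop: each planner gets a fixed
# time budget (20 minutes in the paper) to complete 50 rounds of a problem,
# where a round means reaching a goal state from the initial state. The
# planner object and its solve_round() method are hypothetical stand-ins.
def evaluate(planner, problem, rounds=50, budget_seconds=20 * 60):
    start = time.monotonic()
    successes = 0
    for _ in range(rounds):
        if time.monotonic() - start > budget_seconds:
            break  # budget exhausted; remaining rounds count as failures
        if planner.solve_round(problem):
            successes += 1
    return successes

class AlwaysSucceeds:
    """Trivial stand-in planner used only to exercise the harness."""
    def solve_round(self, problem):
        return True

print(evaluate(AlwaysSucceeds(), "toy-problem"))  # -> 50 for this stand-in
```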