Landmark-Based Heuristics for Goal Recognition
Authors: Ramon Pereira, Nir Oren, Felipe Meneguzzi
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically evaluate these heuristics over both standard goal/plan recognition problems, and a set of very large problems. |
| Researcher Affiliation | Academia | Ramon Fraga Pereira, Nir Oren, Felipe Meneguzzi. Pontifical Catholic University of Rio Grande do Sul, Brazil (ramon.pereira@acad.pucrs.br, felipe.meneguzzi@pucrs.br); University of Aberdeen, United Kingdom (n.oren@abdn.ac.uk) |
| Pseudocode | Yes | Algorithm 1 Compute Achieved Landmarks From Observations. Input: I initial state, G set of candidate goals, O observations, and LG goals and their extracted landmarks. Output: A map of goals to their achieved landmarks. |
| Open Source Code | No | The paper mentions using 'open-source planners, such as FAST-DOWNWARD, FAST-FORWARD, and LAMA', but it does not provide a link or statement about open-sourcing its own developed methodology's code. |
| Open Datasets | Yes | We empirically evaluate our approach using datasets created using 15 domains from the planning literature (http://ipc.icaps-conference.org). |
| Dataset Splits | No | The paper describes evaluating on datasets with 'partial observation sequences represent plans for G where 10%, 30%, 50% or 70% of actions are observed', which relates to observation completeness, not a train/validation/test split of the dataset itself. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., CPU/GPU models, memory). |
| Software Dependencies | No | The paper mentions using 'open-source planners, such as FAST-DOWNWARD, FAST-FORWARD, and LAMA', but it does not specify the version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values, training configurations, or system-level settings. |
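The pseudocode row above quotes the signature of the paper's Algorithm 1 (Compute Achieved Landmarks From Observations): given an initial state, a set of candidate goals, an observation sequence, and each goal's extracted landmarks, it returns a map from goals to the landmarks achieved so far. The sketch below is a simplified, hypothetical reconstruction from that signature alone, not the paper's implementation: it treats landmarks as single facts and counts a landmark as achieved if it holds initially or appears in an observed action's add effects. The `Action` type and all names are illustrative assumptions.

```python
from collections import namedtuple

# Illustrative action representation: a name plus the facts it makes true.
Action = namedtuple("Action", ["name", "add_effects"])

def compute_achieved_landmarks(initial_state, goals, observations, landmarks_per_goal):
    """Map each candidate goal to the subset of its landmarks achieved
    by the observations.

    Simplifying assumption: a fact landmark is 'achieved' if it holds in
    the initial state or is added by some observed action.
    """
    # Collect every fact that held initially or was produced by an observation.
    facts_seen = set(initial_state)
    for action in observations:
        facts_seen |= set(action.add_effects)

    # For each goal, keep only those of its landmarks we have seen achieved.
    return {
        goal: {lm for lm in landmarks_per_goal[goal] if lm in facts_seen}
        for goal in goals
    }
```

A goal-recognition heuristic in the style the paper describes would then rank candidate goals by the ratio of achieved landmarks to total extracted landmarks per goal.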