Active Goal Recognition
Authors: Maayan Shvo, Sheila A. McIlraith (pp. 9957-9966)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate the merits of providing agency to the observer, and the effectiveness of our approach in potentially enhancing the observational power of the observer, as well as expediting and in some cases making possible the recognition of the actor's goal. In this section, we demonstrate the merits of an active observer with a set of experiments. Table 1: Comparison between an active approach (Lines 1-3 of Algorithm 1) and a passive one (Line 3 of Algorithm 1) in various domains using VERED. Each row describes averages over fifteen problems, where the columns stand for number of hypotheses (\|G\|), total number of observations (\|Ofull\|), average time in seconds to run the relevant part of Algorithm 1 (T), and convergence to the correct hypothesis (CV). |
| Researcher Affiliation | Academia | Maayan Shvo, Sheila A. McIlraith, Department of Computer Science, University of Toronto, Toronto, Canada; Vector Institute, Toronto, Canada. {maayanshvo, sheila}@cs.toronto.edu |
| Pseudocode | Yes | Algorithm 1. Require: an active goal recognition problem P = ⟨Σ, I, G, τ⟩. 1: τ ← GENERATEOBSPLAN(P); 2: O ← EXECUTEOBSERVERPLAN(τ); 3: G ← RECOGNIZEGOAL(⟨Σ_A, I_A, G, O⟩); 4: return G |
| Open Source Code | No | The paper mentions using and referencing third-party open-source tools (e.g., 'RG' from 'https://sites.google.com/site/prasplanning/', 'VERED', and 'FAST DOWNWARD'), but does not provide concrete access to the source code for their own active goal recognition methodology. |
| Open Datasets | Yes | We experimented with seven domains, with six being goal recognition benchmarks taken from an openly available repository based on the benchmarks developed by Ramírez and Geffner and later extended and published by Pereira and Meneguzzi (2017). The TERRORIST domain was obtained with thanks to the authors. Pereira, R. F., and Meneguzzi, F. 2017. Goal and Plan Recognition Datasets using Classical Planning Domains. Zenodo. https://doi.org/10.5281/zenodo.825878. |
| Dataset Splits | No | The paper describes how observation subsequences are generated for online recognition but does not specify training, validation, or test dataset splits in the context of model training or evaluation typical for machine learning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions using specific software tools like 'FAST DOWNWARD' and 'PO-PRP', but does not provide their specific version numbers or other software dependencies with version details. |
| Experiment Setup | Yes | We assume unit action cost and only experiment with traces that represent optimal plans executed by the actor. In line 1, in order to construct the PPOS problem R, we augment A_O with elimination actions, as described in Section 5. The landmarks are extracted using the landmark generator in the FAST DOWNWARD system (Helmert 2006), which extracts all landmarks (including the orderings between them) given a classical planning task. We enforce non-intervention by removing all actions a ∈ A_O where EFF(a) includes a landmark (or a negated landmark) l ∈ L_G for any G ∈ G. |
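
The three-line structure of Algorithm 1 can be sketched in plain Python. This is a minimal illustrative skeleton, not the authors' implementation (which is not released): `ActiveGRProblem`, the stub observer planner, and the toy recognizer (which keeps every hypothesis whose required tokens all appear in the observations) are hypothetical stand-ins for the paper's planning-based components.

```python
from dataclasses import dataclass

@dataclass
class ActiveGRProblem:
    # Hypothetical container for P = <Sigma, I, G, tau>; field names are
    # illustrative, not the paper's notation.
    hypotheses: list          # candidate goals G, each a list of required tokens
    observations_source: list # stand-in for the actor's executed trace

def generate_obs_plan(problem):
    # Line 1 (stub): the "observer plan" here is simply to observe
    # every step of the trace, in order.
    return list(range(len(problem.observations_source)))

def execute_observer_plan(problem, plan):
    # Line 2 (stub): executing the plan yields the observations
    # it was designed to capture.
    return [problem.observations_source[i] for i in plan]

def recognize_goal(hypotheses, observations):
    # Line 3 (toy recognizer): keep every hypothesized goal all of whose
    # required tokens were observed.
    seen = set(observations)
    return [g for g in hypotheses if seen >= set(g)]

def active_goal_recognition(problem):
    plan = generate_obs_plan(problem)           # Algorithm 1, line 1
    obs = execute_observer_plan(problem, plan)  # line 2
    return recognize_goal(problem.hypotheses, obs)  # line 3
```

For example, with hypotheses `[["a"], ["a", "b"], ["c"]]` and trace `["a", "b"]`, the recognizer returns the first two hypotheses and rules out the third.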
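
The non-intervention constraint in the setup (remove every observer action whose effects mention a landmark, or a negated landmark, of any hypothesized goal) can be sketched as a simple filter. The data structures below are illustrative assumptions: effects are sets of literals with negation encoded as `('not', fluent)`, which is not necessarily how the paper's PDDL-level implementation represents them.

```python
def non_intervention_filter(observer_actions, landmarks_per_goal):
    """Drop observer actions that could add or delete a goal landmark.

    observer_actions: dict mapping action name -> set of effect literals,
        where a negated literal is written as ('not', fluent).
    landmarks_per_goal: dict mapping goal -> set of landmark fluents L_G.
    Returns the subset of actions whose effects touch no landmark.
    """
    # Collect every landmark and its negation across all hypothesized goals.
    protected = set()
    for landmarks in landmarks_per_goal.values():
        for l in landmarks:
            protected.add(l)           # the landmark itself
            protected.add(('not', l))  # and its negation
    # Keep only actions whose effect set is disjoint from the protected set.
    return {name: eff for name, eff in observer_actions.items()
            if not (eff & protected)}
```

Here, an action like `open-door` with effect `door_open` would be removed whenever `door_open` is a landmark of some candidate goal, while a purely sensory action such as repositioning a camera survives the filter.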