Admissible Abstractions for Near-optimal Task and Motion Planning
Authors: William Vega-Brown, Nicholas Roy
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We implemented AAA* and the abstractions described in sections 4.1 and 4.2 in the Python programming language. We then compared the performance of the planner with the original angelic A* search algorithm [Marthi et al., 2008] and with a search without abstraction using A*. In the navigation domain, we constructed a random discretization with 10^4 states. Examples of the search trees constructed by A* and by AAA* are given in figure 3. By using the abstraction, the algorithm can avoid exploring large parts of the configuration space. Our quantitative results bear this out: using abstraction allows us to reduce the number of states explored by a factor of three and the number of plans considered by several orders of magnitude. |
| Researcher Affiliation | Academia | William Vega-Brown and Nicholas Roy Massachusetts Institute of Technology {wrvb, nickroy}@mit.edu |
| Pseudocode | Yes | Algorithm 1 Approximate Angelic A* |
| Open Source Code | No | The paper mentions an arXiv link for an extended version, but does not provide any explicit statement about releasing code or a direct link to a code repository for the methodology. |
| Open Datasets | No | The paper mentions 'a random discretization with 10^4 states' and '10^4 sampled configurations', which implies the data were generated rather than drawn from an existing corpus; it does not provide any links, DOIs, or citations for a publicly available dataset. |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits, or refer to standard predefined splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (like GPU/CPU models or memory) used for running its experiments. |
| Software Dependencies | No | The paper only states that the implementation was done 'in the Python programming language' but does not provide specific version numbers for Python or any other software dependencies or libraries. |
| Experiment Setup | No | The paper describes the algorithm's parameters like the weight 'w' (e.g., w=1, w=2.5) and the number of sampled configurations for problem instances (10^4 states), but it does not provide specific hyperparameters or system-level training settings like learning rates, batch sizes, or optimizer details typically found in experiment setups. |
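The paper compares AAA* against plain A* and reports weights such as w=1 and w=2.5. The weight enters the standard way, via the evaluation function f(n) = g(n) + w·h(n), where w > 1 trades optimality for speed with cost bounded by w times optimal. A minimal weighted A* sketch in Python (a hypothetical grid domain for illustration only, not the paper's navigation domain or the AAA* algorithm):

```python
import heapq
from itertools import count

def weighted_astar(start, goal, neighbors, heuristic, w=1.0):
    """Weighted A*: expand nodes by f(n) = g(n) + w * h(n).

    With w = 1 and an admissible heuristic this is plain A* (optimal);
    with w > 1 the returned cost is at most w times the optimal cost.
    """
    tie = count()  # tie-breaker so the heap never compares nodes/parents
    frontier = [(w * heuristic(start, goal), next(tie), 0.0, start, None)]
    came_from = {}          # node -> parent, doubles as the closed set
    best_g = {start: 0.0}   # cheapest known cost-to-come per node
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue  # already expanded via a cheaper (or equal) entry
        came_from[node] = parent
        if node == goal:
            path = [node]  # walk parent pointers back to the start
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1], g
        for nxt, step in neighbors(node):
            new_g = g + step
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + w * heuristic(nxt, goal), next(tie), new_g, nxt, node),
                )
    return None, float("inf")  # goal unreachable

# Hypothetical 10x10 4-connected grid with unit step costs.
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1.0)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
path, cost = weighted_astar((0, 0), (9, 9), grid_neighbors, manhattan, w=2.5)
```

On an obstacle-free grid the Manhattan heuristic is consistent, so even with w=2.5 the search returns an optimal-cost path (18 unit steps here); the weight's benefit shows up as fewer expansions, which is the effect the paper's abstraction amplifies further.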