Learning Efficient Logical Robot Strategies Involving Composable Objects
Authors: Andrew Cropper, Stephen H. Muggleton
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We now describe experiments in which we use Metagol_O to learn robot strategies involving composite objects in two scenarios: Postman and Sorter. The experimental goals are (1) to support Theorems 1 and 2, i.e. show that resource complexities of optimal strategies vary depending on whether objects can be composed within a strategy, and (2) show that Metagol_O can learn such resource optimal strategies. |
| Researcher Affiliation | Academia | Andrew Cropper and Stephen H. Muggleton Imperial College London United Kingdom {a.cropper13,s.muggleton}@imperial.ac.uk |
| Pseudocode | Yes | Figure 3: Prolog code for generalised meta-interpreter (a simplified illustrative sketch of this style of meta-interpreter is given after the table) |
| Open Source Code | Yes | Full code for Metagol_O together with all materials for the experiments is available at http://ilp.doc.ic.ac.uk/metagolO. |
| Open Datasets | No | The paper describes how training examples were generated through random selection (e.g., "To generate training examples we select a random integer d from the interval [0, 50]..."), but it does not provide concrete access information (link, DOI, citation) to a publicly available or open dataset. |
| Dataset Splits | No | The paper mentions using "5 training and 5 testing examples" for its experiments but does not explicitly state the use of a validation set or provide specific details on how the data was split (e.g., percentages, sample counts, or predefined splits) for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instance types) used to run the experiments. |
| Software Dependencies | No | The paper mentions 'Prolog' and 'Metagol_O/Metagol_D' but does not specify version numbers for these or any other software dependencies required to replicate the experiments. |
| Experiment Setup | Yes | To generate training examples we select a random integer d from the interval [0, 50] representing the number of places. We select a random integer n from the interval [1, d] representing the number of letters. For each letter we select random integers i and j from the interval [1, d] representing the letter's start and end positions. (...) We use 5 training and 5 testing examples. We average resource complexities of learned strategies over 10 trials. (A hedged sketch of this generation procedure is given after the table.) |
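
The "Pseudocode" row points to the paper's Figure 3, which gives Prolog code for a generalised meta-interpreter. The following is only a simplified, illustrative sketch of that style of metarule-guided meta-interpretation, not the authors' Figure 3: the predicates `bg/1`, `metarule/4`, `prove/3`, `abduce/3` and the example facts are hypothetical names chosen here for illustration.

```prolog
% Simplified sketch of metarule-guided meta-interpretation (not Figure 3).
% A goal is proved either from background facts or by instantiating a
% metarule, recording the substitution as a clause of the induced program.
:- use_module(library(lists)).

% Hypothetical background knowledge.
bg(parent(ann, bob)).
bg(parent(bob, carl)).

% One example metarule (chain): P(X,Y) :- Q(X,Z), R(Z,Y).
metarule(chain, [P, Q, R], [P, X, Y], [[Q, X, Z], [R, Z, Y]]).

% prove(+Atoms, +ProgIn, -ProgOut): prove a list of atoms, threading the
% induced program (a list of metarule substitutions) through the proof.
prove([], Prog, Prog).
prove([Atom|Atoms], Prog1, Prog2) :-
    prove_aux(Atom, Prog1, Prog3),
    prove(Atoms, Prog3, Prog2).

% Case 1: the atom is a background fact.
prove_aux([P|Args], Prog, Prog) :-
    bg(Fact),
    Fact =.. [P|Args].
% Case 2: instantiate a metarule and prove its body.
prove_aux(Atom, Prog1, Prog2) :-
    metarule(Name, Subs, Atom, Body),
    abduce(sub(Name, Subs), Prog1, Prog3),
    prove(Body, Prog3, Prog2).

% Reuse an existing substitution if possible, otherwise add a new one.
abduce(Sub, Prog, Prog) :- memberchk(Sub, Prog), !.
abduce(Sub, Prog, [Sub|Prog]).
```

For example, `prove([[grandparent, ann, carl]], [], Prog)` binds `Prog` to `[sub(chain, [grandparent, parent, parent])]`, i.e. the induced clause `grandparent(X,Y) :- parent(X,Z), parent(Z,Y)`. The paper's Metagol_O additionally bounds the size of the induced program and searches for strategies of minimal resource complexity, which this sketch does not attempt.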
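
The experiment-setup row quotes how Postman training examples were generated. A minimal sketch of that generation procedure, assuming SWI-Prolog's `library(random)` and using hypothetical predicate names (`gen_example/2`, `gen_letters/3`), is given below; it is our reading of the quoted description, not the authors' code.

```prolog
% Sketch of the quoted example-generation procedure (hypothetical names).
:- use_module(library(random)).

% gen_example(-Places, -Letters): pick d places and n letters with random
% start/end positions, as described in the quoted experiment setup.
gen_example(Places, Letters) :-
    random_between(0, 50, Places),        % d: number of places, in [0, 50]
    (   Places > 0
    ->  random_between(1, Places, N)      % n: number of letters, in [1, d]
    ;   N = 0                             % degenerate case: no places, no letters
    ),
    gen_letters(N, Places, Letters).

gen_letters(0, _, []) :- !.
gen_letters(N, Places, [letter(Start, End)|Rest]) :-
    random_between(1, Places, Start),     % i: letter start position, in [1, d]
    random_between(1, Places, End),       % j: letter end position, in [1, d]
    N1 is N - 1,
    gen_letters(N1, Places, Rest).
```

Per the quoted setup, 5 such examples are used for training and 5 for testing, and resource complexities of learned strategies are averaged over 10 trials.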