Competition of Distributed and Multiagent Planners (CoDMAP)
Authors: Michal Štolba, Antonín Komenda, Daniel Kovacs
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As a part of the workshop on Distributed and Multiagent Planning (DMAP) at the International Conference on Automated Planning and Scheduling (ICAPS) 2015, we have organized a competition in distributed and multiagent planning... In this paper we summarize course and highlights of the competition... Each run of a planner in the competition was restricted to 30 minutes on 4 computational cores and 8GB per machine. The metrics used to compare the planners were coverage (number) of solved problems, IPC Score over the plan quality, and IPC score over the planning time. |
| Researcher Affiliation | Academia | Michal Štolba and Antonín Komenda {stolba,komenda}@agents.fel.cvut.cz Department of Computer Science, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic; Daniel L. Kovacs daniel.laszlo.kovacs@gmail.com Department of Measurement and Information Systems, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Hungary |
| Pseudocode | No | No structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures) were found in the paper. |
| Open Source Code | No | The paper references 'The extended BNF can be found at http://agents.fel.cvut.cz/codmap/MA-PDDL-BNF.pdf' and 'The validity and quality of plans was evaluated using the VAL3 tool, which can handle parallel plans and performs the mutex checks.' (with a link to http://www.inf.kcl.ac.uk/research/groups/planning). These links refer to a BNF definition and a third-party validation tool, not the source code for the methodology described in the paper itself. |
| Open Datasets | No | The paper lists benchmark domains used ('BLOCKSWORLD, DEPOT, DRIVERLOG, ELEVATORS08, LOGISTICS00, ROVERS, SATELLITES, SOKOBAN, WOODWORKING, and ZENOTRAVEL, each with 20 problem instances' and 'TAXI and WIRELESS'). However, it does not provide concrete access information (specific link, DOI, repository name, or formal citation with authors/year) for a publicly available or open dataset of these problem instances used in the competition. |
| Dataset Splits | No | The paper describes the use of benchmark domains and problem instances, but does not specify any training, validation, or test dataset splits (percentages, sample counts, or specific splitting methodology) needed to reproduce data partitioning. |
| Hardware Specification | No | The paper states 'Each run of a planner in the competition was restricted to 30 minutes on 4 computational cores and 8GB per machine.' This provides general computational limits but lacks specific hardware details such as exact GPU/CPU models, processor types with speeds, or memory amounts. |
| Software Dependencies | No | The paper mentions formalisms like 'MA-STRIPS', 'STRIPS', 'PDDL', and 'MA-PDDL' and a validation tool 'VAL3'. However, it does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment environment. |
| Experiment Setup | No | The paper states 'Each run of a planner in the competition was restricted to 30 minutes on 4 computational cores and 8GB per machine.' This specifies resource constraints and time limits, but does not provide specific experimental setup details such as concrete hyperparameter values, training configurations, or model initialization settings. |
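The table above repeatedly cites the competition's metrics: coverage and IPC score over plan quality. As a point of reference, the sketch below shows the conventional way these metrics are computed in IPC-style evaluations (each solved problem earns best_cost / planner_cost; unsolved problems earn 0). This is a hedged illustration of the standard scoring scheme, not code from the paper; all function and variable names are hypothetical.

```python
def coverage(costs):
    """Number of problems a planner solved (cost is None if unsolved)."""
    return sum(1 for c in costs if c is not None)

def ipc_quality_score(costs_by_planner):
    """IPC score over plan quality, in the conventional IPC sense:
    for each problem, a planner earns best_cost / its_cost
    (0 if unsolved); per-problem scores are summed.

    `costs_by_planner` maps planner name -> list of plan costs,
    with None marking an unsolved problem.
    """
    n_problems = len(next(iter(costs_by_planner.values())))
    scores = {p: 0.0 for p in costs_by_planner}
    for i in range(n_problems):
        solved = [c[i] for c in costs_by_planner.values() if c[i] is not None]
        if not solved:
            continue  # no planner solved problem i; everyone scores 0 on it
        best = min(solved)
        for planner, costs in costs_by_planner.items():
            if costs[i] is not None:
                scores[planner] += best / costs[i]
    return scores

if __name__ == "__main__":
    # Two hypothetical planners on three problems (None = unsolved).
    results = {"plannerA": [10, None, 30], "plannerB": [20, 15, 30]}
    print(coverage(results["plannerA"]))   # 2
    print(ipc_quality_score(results))      # A: 1.0 + 0 + 1.0; B: 0.5 + 1.0 + 1.0
```

The time-based IPC score mentioned in the paper follows the same sum-over-problems shape but credits faster planners relative to the fastest solution time.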