Multiagent Metareasoning through Organizational Design
Authors: Jason Sleight, Edmund Durfee
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical evaluation confirms that our process generates organizational designs that impart a desired metareasoning regime upon the agents. In Section 5, we empirically evaluate our organizational design algorithm, and find that our ODP finds good organizational designs that impart a target metareasoning regime upon the agents. |
| Researcher Affiliation | Academia | Jason Sleight and Edmund H. Durfee Computer Science and Engineering University of Michigan Ann Arbor, MI 48109 {jsleight,durfee}@umich.edu |
| Pseudocode | No | The paper describes algorithms and methodologies in prose, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | No | To illustrate a problem of this type, we reuse a simplified firefighting scenario (Sleight and Durfee 2013), where firefighting agents and fires to be fought are in a simulated grid world. This indicates a custom domain from previous work by the authors, with no explicit public access information provided for the dataset used in the experiments. |
| Dataset Splits | No | The paper mentions '300 samples' for training and '1500 test problem episodes' for evaluation, but it does not specify explicit training, validation, and test splits with percentages or sample counts for a single dataset, nor does it refer to predefined splits or cross-validation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | Yes | Agents create their optimal local policies with respect to their organizationally augmented local model using CPLEX (IBM 2012) to solve a linear program (Kallenberg 1983). IBM. 2012. IBM ILOG CPLEX. A hedged sketch of this linear-programming step appears after the table. |
| Experiment Setup | Yes | Our experiments use the firefighting domain as previously described in Section 2, where in each episode there are: two agents, who always begin in the initial locations in Figure 1; two fires, each with initial intensity independently and uniformly selected from {1, 2, 3}, and with a uniformly random, but distinct location; delay in each cell independently and uniformly chosen from [0, 1]; and a time horizon of 10. We present results across b values such that at extremely costly reasoning (b = 1E4) the ODP designs an organization where the agents only consider executing a single action (FF in this case), and at extremely low reasoning costs (b = 1E8) designs an organization where every action the ODP expects an agent to ever want to execute is included. An illustrative sampler for this episode setup also appears after the table. |
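The Software Dependencies row notes that each agent computes its optimal local policy by solving a linear program with CPLEX (Kallenberg 1983). As a hedged illustration of that step only, the sketch below solves a tiny discounted MDP with the standard LP formulation; it substitutes `scipy.optimize.linprog` for CPLEX, and every number in it (states, actions, rewards, transitions, discount) is invented for illustration rather than taken from the paper.

```python
# Minimal sketch: optimal MDP values via the standard LP formulation
# (Kallenberg 1983).  The paper's agents use CPLEX; scipy stands in here.
import numpy as np
from scipy.optimize import linprog

n_states, n_actions = 2, 2
gamma = 0.95                      # hypothetical discount factor

# P[a, s, s'] transition probabilities and R[s, a] rewards (made-up values)
P = np.array([[[0.8, 0.2],
               [0.1, 0.9]],
              [[0.5, 0.5],
               [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# LP: minimize sum_s v(s)  subject to  v(s) >= R(s,a) + gamma * P(s,a) . v
# Rearranged into linprog's A_ub @ v <= b_ub form: (gamma*P(s,a) - e_s) . v <= -R(s,a)
c = np.ones(n_states)
I = np.eye(n_states)
A_ub, b_ub = [], []
for s in range(n_states):
    for a in range(n_actions):
        A_ub.append(gamma * P[a, s] - I[s])
        b_ub.append(-R[s, a])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_states)
v = res.x                                          # optimal state values

# Greedy policy extraction: q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] v[s']
q = R + gamma * np.tensordot(P, v, axes=([2], [0])).T
policy = q.argmax(axis=1)
print("values:", v, "greedy policy:", policy)
```

The Experiment Setup row fully parameterizes each firefighting episode, so a short sampler makes the randomization concrete. In the sketch below, the grid dimensions and agent start cells are placeholders (the paper fixes the starts via its Figure 1, which is not reproduced here); the fire intensities, distinct fire locations, per-cell delays, and horizon follow the quoted description.

```python
# Illustrative episode sampler for the firefighting setup described above.
import random

GRID_W, GRID_H = 5, 5             # assumed grid size (not stated in this table)
AGENT_STARTS = [(0, 0), (4, 4)]   # placeholder start cells ("Figure 1" in the paper)
HORIZON = 10

def sample_episode(rng=random):
    cells = [(x, y) for x in range(GRID_W) for y in range(GRID_H)]
    fire_cells = rng.sample(cells, 2)                       # distinct fire locations
    fires = [{"loc": c, "intensity": rng.choice([1, 2, 3])} # intensity from {1, 2, 3}
             for c in fire_cells]
    delays = {c: rng.uniform(0.0, 1.0) for c in cells}      # per-cell delay in [0, 1]
    return {"agents": AGENT_STARTS, "fires": fires,
            "delays": delays, "horizon": HORIZON}

episode = sample_episode()
print(episode["fires"])
```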
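Sampling 300 such episodes for training and 1500 for evaluation, as the Dataset Splits row quotes, would amount to calling a sampler like this repeatedly; the paper itself does not describe any further split structure.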