MIP-Nets: Enabling Information Sharing in Loosely-Coupled Teamwork
Authors: Ofra Amir, Barbara Grosz, Krzysztof Gajos
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach in simulation, showing that it is capable of learning collaboration patterns and sharing relevant information with team members. We evaluated the MIP-DOI algorithm in a simulation environment which uses a collaborative graph coloring problem. Figure 2 shows the precision obtained by each of the algorithms with l = 3, averaged over 10 different graph instances with 5 runs for each graph instance. |
| Researcher Affiliation | Academia | Ofra Amir, Barbara J. Grosz and Krzysztof Z. Gajos Harvard School of Engineering and Applied Sciences {oamir,grosz,kgajos}@seas.harvard.edu |
| Pseudocode | No | The paper describes the MIP-DOI algorithm and provides a formula, but no structured pseudocode or algorithm block is present. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., repository link, explicit statement of code release) for its source code. |
| Open Datasets | No | The paper mentions evaluating on '10 different graph instances' for a collaborative graph coloring problem, but does not provide concrete access information (e.g., specific link, DOI, repository name, formal citation) for these datasets, nor does it identify them as standard public datasets. |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., exact percentages, sample counts for train/validation/test, or citations to predefined splits) to reproduce the data partitioning. It mentions '10 different graph instances with 5 runs for each graph instance' but not how data within these instances is split for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running its experiments. It only mentions 'a simulation environment'. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | We evaluated agents using 3 configurations of MIP-DOI: MIP-DOI-centrality, which only considers objects' centrality (α = 1); MIP-DOI-partner, which only considers objects' proximity to n_p (β1 = 1); and MIP-DOI-focus, which only considers objects' proximity to o_f (β2 = 1). We compared these MIP-DOI variations with 4 baselines: an Omniscient agent, which has access to the graph structure and chooses objects in proportion to their distance from o_f; an agent that shares the most frequently changed objects; an agent that shares the most recently changed objects; and an agent that chooses objects randomly. Figure 2 shows the precision obtained by each of the algorithms with l = 3, averaged over 10 different graph instances with 5 runs for each graph instance. |
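The Experiment Setup row describes MIP-DOI variants obtained by weighting three object features: centrality (α), proximity to the partner n_p (β1), and proximity to the focus object o_f (β2). Below is a minimal sketch of such a weighted degree-of-interest ranking, assuming a simple linear combination and hypothetical feature dictionaries; the paper's exact formula is not reproduced here, so function names and data shapes are illustrative only.

```python
def mip_doi_score(obj, weights, features):
    """Hypothetical degree-of-interest score: a linear combination of
    an object's centrality, its proximity to the partner (n_p), and
    its proximity to the focus object (o_f)."""
    return (weights["alpha"] * features["centrality"][obj]
            + weights["beta1"] * features["prox_partner"][obj]
            + weights["beta2"] * features["prox_focus"][obj])

def select_objects_to_share(objects, l, weights, features):
    """Rank objects by their DOI score and share the top-l with a teammate."""
    ranked = sorted(objects, key=lambda o: mip_doi_score(o, weights, features),
                    reverse=True)
    return ranked[:l]

# Toy feature values for three objects (illustrative, not from the paper).
features = {
    "centrality":   {"a": 0.9, "b": 0.2, "c": 0.5},
    "prox_partner": {"a": 0.1, "b": 0.8, "c": 0.3},
    "prox_focus":   {"a": 0.2, "b": 0.4, "c": 0.9},
}

# MIP-DOI-centrality corresponds to alpha = 1 with both betas zero.
shared = select_objects_to_share(["a", "b", "c"], 2,
                                 {"alpha": 1, "beta1": 0, "beta2": 0},
                                 features)
# -> ['a', 'c']
```

Setting exactly one weight to 1 and the rest to 0 reproduces the three single-feature configurations the row describes (MIP-DOI-centrality, MIP-DOI-partner, MIP-DOI-focus).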