Engineering Agreement: The Naming Game with Asymmetric and Heterogeneous Agents
Authors: Jie Gao, Bo Li, Grant Schoenebeck, Fang-Yi Yu
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 1) we show that increasing asymmetry in network topology can improve convergence rates. The star graph empirically converges faster than all previously studied graphs; 2) we consider graph topologies that are particularly challenging for the Naming Game, such as disjoint cliques or multi-level trees, and ask how much extra homogeneity (random edges) is required to allow convergence or fast convergence. We provide theoretical analysis, which is confirmed by simulations; 3) we analyze how consensus can be manipulated when stubborn nodes are introduced at different points of the process. |
| Researcher Affiliation | Academia | Jie Gao (Stony Brook University); Bo Li, Grant Schoenebeck, and Fang-Yi Yu (University of Michigan) |
| Pseudocode | No | The paper describes the steps of the Naming Game process in prose but does not present them in a formal pseudocode or algorithm block; a hedged reproduction sketch is given below the table. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | Yes | 2. Regular random graph G_{n,k} (see Bollobás (1998)): every node has degree k = 8 and the connections are randomly sampled under this constraint. 3. Kleinberg's small world model (Kleinberg 2000): in the standard Kleinberg model the nodes lie on a two-dimensional grid. 4. Watts-Strogatz's small world model (Watts and Strogatz 1998): the nodes lie on a one-dimensional ring and connect to the 8 nearest nodes with respect to Manhattan distance; the edges are then rewired independently with probability 0.5. Construction sketches for these topologies are given below the table. |
| Dataset Splits | No | The paper does not provide specific details about training/validation/test dataset splits for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies (e.g., library names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | Unless mentioned otherwise, we will use the same setting defined above in Section 2. From Figure 2 we can see that the star graph converges the fastest. The tree graph is in fact the slowest. If the tree has two levels with 5000 nodes, after 10^7 steps the nodes still cannot reach consensus. Therefore we did not present the consensus time of the tree in the figure. |
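
As noted in the Pseudocode row, the paper describes the Naming Game only in prose. The sketch below is a minimal implementation of the standard pairwise Naming Game, not the authors' code: the function name `naming_game`, the uniform speaker selection, and the consensus-check interval are assumptions made for illustration and may differ from the paper's asymmetric speaker/listener rules.

```python
import random
import networkx as nx

def naming_game(G, max_steps=10_000_000, seed=None):
    """Minimal sketch of the pairwise Naming Game on a graph G.

    Each step picks a random speaker and a random neighbor as listener.
    The speaker utters a random word from its vocabulary (inventing one if
    the vocabulary is empty); on success both agents collapse to that word,
    on failure the listener adds it. Returns the number of steps until
    global consensus, or None if max_steps is exhausted.
    """
    rng = random.Random(seed)
    vocab = {v: set() for v in G.nodes}
    next_word = 0
    for step in range(1, max_steps + 1):
        speaker = rng.choice(list(G.nodes))
        neighbors = list(G.neighbors(speaker))
        if not neighbors:
            continue
        listener = rng.choice(neighbors)
        if not vocab[speaker]:                  # invent a new word if needed
            vocab[speaker].add(next_word)
            next_word += 1
        word = rng.choice(tuple(vocab[speaker]))
        if word in vocab[listener]:             # success: both collapse to the word
            vocab[speaker] = {word}
            vocab[listener] = {word}
        else:                                   # failure: listener learns the word
            vocab[listener].add(word)
        # periodic consensus check: every vocabulary is the same singleton
        if step % len(G) == 0:
            words = {frozenset(v) for v in vocab.values()}
            if len(words) == 1 and len(next(iter(words))) == 1:
                return step
    return None
```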
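The graph families quoted in the Open Datasets row are all synthetic and can be regenerated rather than downloaded. Below is a construction sketch using networkx; the population size of 5000 is taken from the Experiment Setup row, while the Kleinberg-model parameters are not specified in the quoted text, so networkx's defaults are used as a stand-in.

```python
import networkx as nx

n = 5000  # node count quoted in the Experiment Setup row

# Star graph: one hub plus n - 1 leaves (reported as the fastest to converge).
star = nx.star_graph(n - 1)

# Regular random graph G_{n,k} with every node of degree k = 8.
regular = nx.random_regular_graph(8, n)

# Watts-Strogatz small world: ring lattice with 8 nearest neighbors,
# each edge rewired independently with probability 0.5.
ws = nx.watts_strogatz_graph(n, 8, 0.5)

# Kleinberg-style navigable small world on a 2D grid (networkx variant with
# default parameters; converted to an undirected graph for the sketch above).
kleinberg = nx.Graph(nx.navigable_small_world_graph(71, seed=0))  # 71*71 ≈ 5041 nodes

# Example: estimate consensus time on the star graph with the sketch above.
# steps = naming_game(star, seed=0)
```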