Emergent Communication under Varying Sizes and Connectivities
Authors: Jooyeon Kim, Alice Oh
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This research provides an analytical study of the shared emergent language within the group communication settings of different sizes and connectivities. As the group size increases up to hundreds, agents start to speak dissimilar languages, but the rate at which they successfully communicate is maintained. We observe the emergence of different dialects when we restrict group communication to have local connectivities only. Finally, we provide optimization results of group communication graphs when the number of agents one can communicate with is restricted or when we penalize communication between distant agent pairs. The optimized communication graphs show superior communication success rates compared to graphs with the same number of links, as well as the emergence of hub nodes and scale-free networks. |
| Researcher Affiliation | Collaboration | Jooyeon Kim, thingsflow, Seoul, Korea (jyscardioid@gmail.com); Alice Oh, KAIST, Daejeon, Korea (alice.oh@kaist.edu) |
| Pseudocode | Yes | The communication algorithm that summarizes the overall procedure can be referenced in Appendix B. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating the release of open-source code for the methodology described. |
| Open Datasets | Yes | Furthermore, aside from the artificial shape-color object datasets, we also experiment with the real-world CIFAR-10 images with ten classes [Krizhevsky, 2009]. |
| Dataset Splits | Yes | We generate 128,000 observations for a training set and 12,800 observations for test and validation sets. The early stopping rule is applied for the validation set, and the reported results are calculated using the test set. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper mentions software like PyTorch, Adam, and various algorithms (Gumbel-softmax, VAE, t-SNE, CMA-ES) but does not provide specific version numbers for the software dependencies used in their implementation. |
| Experiment Setup | Yes | Agents observe 10 objects with 1 target object and 9 distractors. The size of the decision-action space \|A_e\| is set to 10. For both discrete and continuous messages, we set the dimensionality of m ∈ A_m to 10. Discrete messages are binary vectors using the Gumbel-softmax relaxation. Continuous messages are real-valued vectors. Object observations are RGB pixel-based images of size 32 × 320. Each image contains 10 objects with 6 different colors (red, green, blue, cyan, magenta, yellow) and 5 different shapes (ellipse, triangle, quadrilateral, pentagon, hexagon). The instruction is an 11-dimensional vector that concatenates 6- and 5-dimensional one-hot vectors for colors and shapes. Example object observations are in Appendix A. ... Finally, the model parameter settings are detailed in Appendix C. |
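The setup cell mentions that discrete messages are produced with the Gumbel-softmax relaxation over a 10-dimensional message space. As a minimal NumPy sketch of that mechanism — not the authors' implementation, which per the table is not released, and with the function name, uniform logits, and temperature value chosen purely for illustration:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Sample a relaxed one-hot message vector via the Gumbel-softmax trick.

    Adds Gumbel(0, 1) noise to the logits, divides by a temperature tau,
    and applies a softmax. Lower tau pushes the output closer to one-hot.
    """
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise via inverse-CDF sampling of uniforms.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max())  # stable softmax
    return y / y.sum()

# A 10-dimensional relaxed message, matching the paper's message dimensionality.
logits = np.zeros(10)  # hypothetical speaker logits for illustration
m = gumbel_softmax(logits, tau=0.5)
```

The output sums to 1 and becomes nearly one-hot as `tau` decreases, which is what lets discrete message choices be trained with ordinary gradients.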