Multi-Agent Cooperation and the Emergence of (Natural) Language
Authors: Angeliki Lazaridou, Alexander Peysakhovich, Marco Baroni
ICLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore how to make changes to the game environment to cause the word meanings induced in the game to better reflect intuitive semantic properties of the images. |
| Researcher Affiliation | Collaboration | Angeliki Lazaridou (Google DeepMind), Alexander Peysakhovich (Facebook AI Research), Marco Baroni (Facebook AI Research, University of Trento); angeliki@google.com, {alexpeys,mbaroni}@fb.com |
| Pseudocode | No | The paper describes the game framework and training details in prose, but does not provide structured pseudocode or algorithm blocks (a minimal reconstruction of the game loop is sketched after this table). |
| Open Source Code | No | The paper does not include any explicit statements or links indicating that its source code is publicly available. |
| Open Datasets | Yes | We randomly sample 100 images of each concept from ImageNet (Deng et al., 2009). |
| Dataset Splits | No | The paper reports 'a total of 50k iterations (games)' for training and 'a set of 10k games' for testing, but gives no explicit validation set or split percentages. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a 'VGG ConvNet' but does not specify version numbers for any software, libraries, or frameworks used. |
| Experiment Setup | Yes | From the paper's General Training Details: 'We set the following hyperparameters without tuning: embedding dimensionality: 50, number of filters applied to embeddings by informed sender: 20, temperature of Gibbs distributions: 10. We explore two vocabulary sizes: 10 and 100 symbols.' (These values are collected in the config sketch after this table.) |
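
Since the paper gives no pseudocode, the following is a minimal sketch of one round of the referential game as described in the paper's prose. The `sender` and `receiver` callables and the `play_game` name are hypothetical placeholders, not the authors' code, and the Reinforce-based training the paper actually uses is omitted.

```python
import random

def play_game(sender, receiver, target_image, distractor_image):
    """One round of the referential game (hypothetical reconstruction).

    sender/receiver are placeholder callables standing in for the
    paper's agent networks, which are trained with Reinforce.
    """
    # The sender sees both images, knows which one is the target,
    # and emits a single symbol from its vocabulary.
    symbol = sender(target_image, distractor_image)

    # The receiver sees the two images in shuffled order (it does not
    # know which is the target) together with the sender's symbol,
    # and points at one of them.
    candidates = [target_image, distractor_image]
    random.shuffle(candidates)
    guess = receiver(symbol, candidates)

    # Both agents get payoff 1 if the receiver picks the target, else 0.
    return int(candidates[guess] is target_image)
```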
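
The setup details quoted in the Experiment Setup and Dataset Splits rows can be collected into a single configuration. The dictionary name and key names below are illustrative, not from the paper, but every value is one the paper reports.

```python
# Hyperparameters reported in the paper (set without tuning); the dict
# name and key names are illustrative, not from the paper.
HPARAMS = {
    "embedding_dim": 50,            # embedding dimensionality
    "informed_sender_filters": 20,  # filters applied to embeddings by informed sender
    "gibbs_temperature": 10,        # temperature of the Gibbs distributions
    "vocab_sizes": (10, 100),       # the two vocabulary sizes explored
    "train_games": 50_000,          # "a total of 50k iterations (games)"
    "test_games": 10_000,           # "a set of 10k games"
}
```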