Emergent Communication of Generalizations
Authors: Jesse Mu, Noah Goodman
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We examine the languages developed for our proposed communication games over two datasets: first, an artificial shape dataset which allows us to evaluate communication over cleanly defined logical concepts; second, a dataset of birds to test agents' ability to learn concepts from realistic visual input. |
| Researcher Affiliation | Academia | Jesse Mu (Stanford University, muj@stanford.edu); Noah Goodman (Stanford University, ngoodman@stanford.edu) |
| Pseudocode | No | The paper describes the models and training procedure textually but does not include any explicit pseudocode blocks or algorithm listings. |
| Open Source Code | Yes | For full model and training details and a link to code, see Appendix A. |
| Open Datasets | Yes | We use the ShapeWorld visual reasoning dataset [18] (Figure 2, left). We next use the Caltech-UCSD Birds dataset [40], which contains 200 classes of birds with 40–60 images each (Figure 2, right). |
| Dataset Splits | No | The paper specifies test splits (e.g., '20% held out for testing' and '100 classes at train and 50 at test') but does not explicitly mention a separate validation set used during training; a sketch of such a class-disjoint split follows the table. |
| Hardware Specification | Yes | All models were trained on a single NVIDIA 2080 Ti GPU. |
| Software Dependencies | No | The paper mentions using 'PyTorch' and the 'Transformers' library but does not specify their version numbers; the Adam optimizer [15] is also cited, but it is an algorithm rather than a versioned software dependency. |
| Experiment Setup | Yes | We use the Adam optimizer [15] with learning rate 10^-4, and a batch size of 64. A second sketch after the table illustrates this configuration. |
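The class-disjoint generalization split noted in the Dataset Splits row can be made concrete with a minimal sketch. Only the 100-train/50-test class counts come from the paper; the class pool, seed, and variable names below are illustrative assumptions, not the authors' code.

```python
import random

# Hedged sketch of a class-disjoint split in the style the paper reports
# for CUB: 100 bird classes at train and 50 held-out classes at test.
# The 150-class pool and the seed are illustrative assumptions.
rng = random.Random(0)
classes = list(range(150))  # hypothetical class IDs
rng.shuffle(classes)

train_classes = set(classes[:100])  # concepts seen during training
test_classes = set(classes[100:])   # novel concepts for generalization tests

assert train_classes.isdisjoint(test_classes)
assert len(train_classes) == 100 and len(test_classes) == 50
```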
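For the Experiment Setup row, here is a minimal PyTorch sketch of the reported optimization settings (Adam, learning rate 10^-4, batch size 64). The linear model and random tensors are placeholders, not the paper's speaker/listener agents:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and synthetic data; only the optimizer settings and
# batch size are taken from the paper.
model = nn.Linear(512, 16)
optimizer = optim.Adam(model.parameters(), lr=1e-4)  # learning rate 10^-4

data = TensorDataset(torch.randn(256, 512), torch.randint(0, 16, (256,)))
loader = DataLoader(data, batch_size=64, shuffle=True)  # batch size 64

loss_fn = nn.CrossEntropyLoss()
for inputs, targets in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```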