Socially Intelligent Genetic Agents for the Emergence of Explicit Norms
Authors: Rishabh Agrawal, Nirav Ajmeri, Munindar P. Singh
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We run simulations of pragmatic, selfish, considerate, and mixed agent societies... We report results for each simulation run eight times for 10,000 timesteps. |
| Researcher Affiliation | Academia | Rishabh Agrawal¹, Nirav Ajmeri², and Munindar P. Singh¹ (¹North Carolina State University; ²University of Bristol) |
| Pseudocode | No | The paper describes the methods in prose (e.g., "Create Match Set", "Cover Context") but does not present them as structured pseudocode or algorithm blocks; a hedged sketch of these standard XCS steps appears below the table. |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | No | "This scenario is based on our running example and is implemented using MASON [Luke et al., 2005]. Our simulation consists of a population of agents." The paper describes the simulation setup but does not use, or link to, a publicly available dataset. |
| Dataset Splits | No | The paper describes a reinforcement learning approach where agents learn from rewards in a simulated environment, but it does not specify explicit training, validation, or test dataset splits in the conventional sense. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used to run the simulations or experiments. |
| Software Dependencies | No | The paper mentions MASON and the eXtended Classifier System (XCS) but does not provide version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We report results for each simulation run eight times for 10,000 timesteps. An agent stays at one location for a random number of steps chosen from a Gaussian distribution with a mean of 60 steps and a standard deviation of 30, with the number of steps restricted to the range [30, 90]. At each timestep, an agent calls another agent with a probability chosen from a Gaussian distribution with a mean of 5% and a standard deviation of 1%. (A parameter sketch follows the table.) |
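
The "Create Match Set" and "Cover Context" steps named in the Pseudocode row are standard XCS operations: the match set collects every classifier whose condition matches the current context, and covering creates a new classifier when some action is unrepresented in the match set. The sketch below is ours, not the authors' code; the identifiers, the binary context encoding, and the initial parameter values are all assumptions, since the paper reports no XCS hyperparameters.

```python
import random
from dataclasses import dataclass

@dataclass
class Classifier:
    condition: str            # e.g. "1#0", where '#' is a "don't care" wildcard
    action: int
    prediction: float = 10.0  # initial values are placeholders; the paper
    error: float = 0.0        # does not report XCS hyperparameters
    fitness: float = 0.01

def matches(condition: str, context: str) -> bool:
    """A condition matches a binary context if every non-wildcard bit agrees."""
    return all(c == '#' or c == x for c, x in zip(condition, context))

def create_match_set(population: list, context: str) -> list:
    """Match set [M]: all classifiers whose conditions match the context."""
    return [cl for cl in population if matches(cl.condition, context)]

def cover_context(context: str, match_set: list, all_actions: list,
                  p_wildcard: float = 0.33) -> Classifier:
    """Covering: called only when the match set lacks some action; builds a
    classifier for a missing action, generalizing context bits to wildcards
    with probability p_wildcard."""
    condition = ''.join('#' if random.random() < p_wildcard else bit
                        for bit in context)
    present = {cl.action for cl in match_set}
    missing = [a for a in all_actions if a not in present]
    return Classifier(condition, random.choice(missing))
```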
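
The quoted experiment-setup parameters can be summarized in a short sketch. The paper implements its simulation in MASON (Java) and releases no source, so everything below is a hypothetical Python rendering of the reported numbers, not the authors' method: the identifiers, the seeding scheme, and the clamping of out-of-range draws are our assumptions.

```python
import random

NUM_RUNS = 8            # "each simulation run eight times"
NUM_TIMESTEPS = 10_000  # "for 10,000 timesteps"

def stay_duration(rng: random.Random) -> int:
    """Steps an agent stays at one location: Gaussian(mean=60, sd=30),
    restricted to the reported range [30, 90]."""
    return int(min(max(rng.gauss(60, 30), 30), 90))

def call_probability(rng: random.Random) -> float:
    """Per-timestep call probability: Gaussian(mean=0.05, sd=0.01).
    Clamping to [0, 1] is our assumption; the paper does not say how
    out-of-range draws are handled, nor whether the probability is
    redrawn each timestep or fixed per agent."""
    return min(max(rng.gauss(0.05, 0.01), 0.0), 1.0)

for run in range(NUM_RUNS):
    rng = random.Random(run)  # seeding is our choice; no seeds are reported
    for t in range(NUM_TIMESTEPS):
        if rng.random() < call_probability(rng):
            pass  # agent places a call; society-specific behavior omitted
```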