Fostering Collective Action in Complex Societies Using Community-Based Agents
Authors: Jonathan Skaggs, Michael Richards, Melissa Morris, Michael A. Goodrich, Jacob W. Crandall
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Via simulations and user studies, we evaluate the ability of CAB agents to interact in JHG societies consisting of humans and AI agents. |
| Researcher Affiliation | Academia | Computer Science Department, Brigham Young University, Provo, UT, USA {jbskaggs12, michael.richards256, mel4college}@gmail.com, {mike, crandall}@cs.byu.edu |
| Pseudocode | Yes | Algorithm 1 CAB token allocations in round τ for player i. |
| Open Source Code | Yes | Supporting documentation, results, and code are available at: https://github.com/jakecrandall/IJCAI2024_SM.git |
| Open Datasets | No | The paper uses a custom game environment (the JHG) for simulations and user studies; data are generated during interaction rather than drawn from a pre-existing, publicly available dataset. |
| Dataset Splits | No | The paper describes evolutionary simulations and user studies but does not specify traditional train/validation/test dataset splits with percentages or counts for reproduction. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper describes the algorithms used (e.g., the Louvain Method, a genetic algorithm) but does not provide specific software dependencies with version numbers (a community-detection sketch follows the table). |
| Experiment Setup | Yes | Games were played under three conditions: (1) Majority Human (2 CAB agents and 6 humans); (2) Even (4 CAB agents and 4 humans); and (3) Majority Bot (6 CAB agents and 2 humans). Twenty-four people, participating in groups of eight, volunteered for the study. Each participant played three games (lasting 21-25 rounds), one in each condition, such that six games were played in each condition. To mitigate possible learning effects, conditions were counter-balanced across sessions (a sketch of one possible counterbalancing scheme follows the table). |
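
Since the Software Dependencies row names the Louvain Method without a library or version, here is a minimal sketch of how community structure over JHG players might be extracted. It assumes the networkx implementation (`nx.community.louvain_communities`, available in networkx >= 2.8); the example graph, player names, and edge weights are all hypothetical, as the paper does not specify its implementation.

```python
# Hedged sketch: community detection with the Louvain Method, assuming the
# networkx implementation. The graph below is hypothetical; in a JHG game,
# edge weights might encode token flows between players.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("p1", "p2", 5), ("p2", "p3", 4), ("p1", "p3", 3),  # tightly linked trio
    ("p4", "p5", 6), ("p5", "p6", 2), ("p4", "p6", 4),  # second trio
    ("p3", "p4", 1),                                    # weak bridge between them
])

# `seed` fixes the randomized node ordering so runs are repeatable.
communities = nx.community.louvain_communities(G, weight="weight", seed=0)
print(communities)  # e.g., [{'p1', 'p2', 'p3'}, {'p4', 'p5', 'p6'}]
```

Pinning such a dependency with an exact version (e.g., in a requirements file) is precisely what the "No" rating in the Software Dependencies row flags as missing.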
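
The Experiment Setup row's condition counts translate directly into configuration. The sketch below encodes the three conditions from the paper and shows one common way to counterbalance condition order across sessions, a Latin-square rotation; the rotation itself is an assumption, since the paper states only that conditions were counter-balanced.

```python
# The three conditions are taken from the paper; the Latin-square rotation is
# an assumed realization of "counter-balanced across sessions".
CONDITIONS = [
    ("Majority Human", {"cab_agents": 2, "humans": 6}),
    ("Even",           {"cab_agents": 4, "humans": 4}),
    ("Majority Bot",   {"cab_agents": 6, "humans": 2}),
]

def latin_square_orders(conditions):
    """Rotate the condition list so that, across sessions, each condition
    appears in every serial position exactly once."""
    n = len(conditions)
    return [[conditions[(i + shift) % n] for i in range(n)] for shift in range(n)]

for session, order in enumerate(latin_square_orders(CONDITIONS), start=1):
    print(f"Session {session}: " + " -> ".join(name for name, _ in order))
# Session 1: Majority Human -> Even -> Majority Bot
# Session 2: Even -> Majority Bot -> Majority Human
# Session 3: Majority Bot -> Majority Human -> Even
```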