Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Fostering Collective Action in Complex Societies Using Community-Based Agents

Authors: Jonathan Skaggs, Michael Richards, Melissa Morris, Michael A. Goodrich, Jacob W. Crandall

IJCAI 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Via simulations and user studies, we evaluate the ability of CAB agents to interact in JHG societies consisting of humans and AI agents. |
| Researcher Affiliation | Academia | Computer Science Department, Brigham Young University, Provo, UT, USA. EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: CAB token allocations in round τ for player i. |
| Open Source Code | Yes | Supporting documentation, results, and code are available at: https://github.com/jakecrandall/IJCAI2024 SM.git |
| Open Datasets | No | The paper uses a custom game environment (JHG) for simulations and user studies, which generates data during interaction; it does not use a pre-existing, publicly available dataset. |
| Dataset Splits | No | The paper describes evolutionary simulations and user studies but does not specify train/validation/test splits with the percentages or counts needed for reproduction. |
| Hardware Specification | No | The paper does not specify the hardware used to run the experiments. |
| Software Dependencies | No | The paper names the algorithms used (e.g., the Louvain Method and a genetic algorithm) but does not list software dependencies with version numbers. |
| Experiment Setup | Yes | Games were played under three conditions: (1) Majority Human (2 CAB agents and 6 humans); (2) Even (4 CAB agents and 4 humans); and (3) Majority Bot (6 CAB agents and 2 humans). Twenty-four people, participating in groups of eight, volunteered for the study. Each participant played three games (lasting 21–25 rounds), one in each condition, so that six games were played in each condition. To mitigate possible learning effects, conditions were counterbalanced across sessions. |
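The experiment setup described above can be sketched in a few lines. A minimal sketch: the three condition definitions come directly from the paper, while the rotation-based counterbalancing scheme is an illustrative assumption, not the authors' published procedure.

```python
# Each JHG game has 8 players: (CAB agents, human participants).
# Condition names and counts are taken from the paper's experiment setup.
conditions = {
    "Majority Human": (2, 6),
    "Even": (4, 4),
    "Majority Bot": (6, 2),
}

# Sanity check: every condition fills an 8-player game.
for cabs, humans in conditions.values():
    assert cabs + humans == 8

# Counterbalancing sketch (assumed): rotate the condition order across
# sessions so each condition appears once in every position (a simple
# Latin-square-style rotation).
names = list(conditions)
session_orders = [names[i:] + names[:i] for i in range(len(names))]

for order in session_orders:
    print(" -> ".join(order))
```

Running this prints one condition ordering per session; each condition occupies each position (first, second, third game) exactly once across the three orderings, which is the property counterbalancing is meant to guarantee.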