The Interplay of Emotions and Norms in Multiagent Systems
Authors: Anup K. Kalia, Nirav Ajmeri, Kevin S. Chan, Jin-Hee Cho, Sibel Adalı, Munindar P. Singh
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: "We obtain data from an empirical study involving game play with respect to the above variables. We provide a step-wise process to discover two new Dynamic Bayesian models based on maximizing log-likelihood scores with respect to the data. We compare the new models with the baseline models to discover new insights into the relevant relationships. Our empirically supported models are thus holistic and characterize how emotions influence norm outcomes better than previous approaches." A minimal sketch of this kind of log-likelihood structure scoring appears after the table. |
| Researcher Affiliation | Collaboration | Anup K. Kalia (IBM Thomas J. Watson Research Center), Nirav Ajmeri (North Carolina State University), Kevin S. Chan (US Army Research Lab), Jin-Hee Cho (Virginia Polytechnic Institute and State University), Sibel Adalı (Rensselaer Polytechnic Institute), Munindar P. Singh (North Carolina State University) |
| Pseudocode | No | The paper describes processes and models but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing open-source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper states: "To empirically ground our work, we develop a variant (Figure 2) of [Gal et al., 2010]'s Colored Trails game." and "We associate our variables to the data collected from game play, chats, and surveys provided by subjects." This indicates the authors collected their own data from a variant of an existing game, but they do not provide a link, a citation to a public version of the collected data, or repository information for accessing it. |
| Dataset Splits | Yes | The paper states: "We obtain an AUC score for each model by averaging three AUC scores obtained by performing threefold cross-validation respectively on the data for each model. When the data is inadequate for three folds, we consider the AUC score obtained from fewer folds." A minimal sketch of this cross-validated AUC protocol appears after the table. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as CPU/GPU models, memory, or cloud instances. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python version, library versions) that would be needed to replicate the experiment. |
| Experiment Setup | No | The paper describes the game rules and data collection process, but it does not specify concrete experimental setup details such as hyperparameters for model training (e.g., learning rates, batch sizes, number of epochs) or system-level training settings. |
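The paper reports discovering its Dynamic Bayesian models by maximizing log-likelihood scores against the collected data, but no code accompanies that claim. The sketch below illustrates, under stated assumptions, how a log-likelihood score can be computed for one candidate structure over discrete tabular data; the function name `log_likelihood_score`, the `parents`-dictionary representation of a structure, and the use of pandas are illustrative choices rather than the authors' implementation, and the sketch scores a static structure rather than a full Dynamic Bayesian model.

```python
import numpy as np
import pandas as pd

def log_likelihood_score(data: pd.DataFrame, parents: dict) -> float:
    """Maximized log-likelihood of a discrete network structure given data.

    `parents` maps each variable (a column of `data`) to a list of parent
    columns. With maximum-likelihood CPT estimates, the score reduces to
    the sum over (value, parent-configuration) pairs of
    N(x, pa) * log(N(x, pa) / N(pa)).
    """
    score = 0.0
    for var, pars in parents.items():
        if pars:
            joint = data.groupby(pars + [var]).size()   # N(x, pa)
            parent = data.groupby(pars).size()          # N(pa)
            for idx, n in joint.items():
                if n == 0:
                    continue  # skip unobserved combinations
                pa_idx = idx[:-1] if len(pars) > 1 else idx[0]
                score += n * np.log(n / parent.loc[pa_idx])
        else:
            # Root node: marginal counts N(x) over the whole dataset.
            counts = data[var].value_counts()
            score += float((counts * np.log(counts / len(data))).sum())
    return score
```

Candidate structures (parent sets) can then be compared by their scores and the highest-scoring one retained; the paper does not specify its search procedure or whether a penalized score (e.g., BIC) was used, so this sketch should be read only as an illustration of score-based structure comparison.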
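For the dataset-splits entry, the quoted protocol averages AUC over threefold cross-validation and falls back to fewer folds when the data is inadequate. The sketch below is a minimal illustration of that evaluation loop using scikit-learn; the logistic-regression classifier is a stand-in, since the paper's Dynamic Bayesian models are not available, and the function name `mean_cv_auc` is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def mean_cv_auc(X, y, n_splits=3, random_state=0):
    """Average AUC over up to `n_splits` stratified folds.

    Mirrors the described protocol: threefold cross-validation where the
    data allows it, fewer folds otherwise.
    """
    # Every class must appear in each fold, so cap the fold count by the
    # size of the smallest class (never dropping below two folds).
    minority = int(np.bincount(y).min())
    folds = max(2, min(n_splits, minority))

    cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=random_state)
    aucs = []
    for train_idx, test_idx in cv.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]  # probability of the positive class
        aucs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aucs))
```

Stratified folds keep the label balance comparable across folds, which matters for small game-play datasets where some outcomes are rare; the fold-count fallback is an assumption about how "fewer folds" was handled, since the paper does not give those details.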