Leveraging Fee-Based, Imperfect Advisors in Human-Agent Games of Trust
Authors: Cody Buntain, Amos Azaria, Sarit Kraus
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To gather this data, we extended Berg's game and conducted a series of experiments using Amazon's Mechanical Turk to determine how humans behave in these potentially adversarial conditions. |
| Researcher Affiliation | Academia | Cody Buntain, Department of Computer Science, University of Maryland, College Park, Maryland 20742 USA; Amos Azaria and Sarit Kraus, Department of Computer Science, Bar-Ilan University, Ramat-Gan, Israel 52900 |
| Pseudocode | No | The paper presents mathematical models and equations, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statements about releasing source code or links to a code repository for the methodology described. |
| Open Datasets | No | The paper describes data collection using Amazon Mechanical Turk from human participants ('hundreds of interactions with human participants') and details participant selection criteria. However, it does not mention a publicly available dataset in the conventional sense, nor does it provide any specific access information (link, citation, repository) for the collected data. |
| Dataset Splits | No | The paper describes a 'priming phase' and 'testing phase' for experiments and specifies conditions like 'one-shot and multi-round experiments' and 'three types of games'. However, it does not provide explicit dataset splits in terms of training, validation, or test percentages or counts, which are typical for machine learning reproducibility. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not specify any software names with version numbers, nor does it mention specific libraries, frameworks, or solvers with their versions. |
| Experiment Setup | Yes | In all cases, the number of advisors was k = 5, advisor noise was Pn = 0.01, and bribery cost was ρb = 0.1. For one-shot games, solicitation cost was constant at ρs = 0.1, but we varied it between ρs = 0.01 and ρs = 0.1 in multi-round games. |
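The experiment-setup parameters quoted in the table can be collected into a small configuration sketch. This is a hypothetical illustration for reimplementation purposes only; the paper releases no code, and the class and field names here are our own:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TrustGameConfig:
    """Experiment parameters as reported in the paper's setup section."""
    num_advisors: int = 5          # k, number of fee-based advisors
    advisor_noise: float = 0.01    # Pn, probability an advisor's signal is noisy
    bribery_cost: float = 0.1      # rho_b, cost of bribing an advisor
    solicitation_cost: float = 0.1  # rho_s, cost of soliciting advice


# One-shot games held solicitation cost constant at rho_s = 0.1;
# multi-round games varied it between 0.01 and 0.1.
one_shot = TrustGameConfig()
multi_round_low = TrustGameConfig(solicitation_cost=0.01)
multi_round_high = TrustGameConfig(solicitation_cost=0.1)
```

Freezing the dataclass keeps each condition's parameters immutable, so the one-shot and multi-round conditions cannot drift apart during a run.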