Human-Robot Trust and Cooperation Through a Game Theoretic Framework

Authors: Erin Paeng, Jane Wu, James Boerkoel

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical analysis shows that humans tend to trust robots to a greater degree than they trust other humans, while cooperating equally well in both cases.
Researcher Affiliation | Academia | Harvey Mudd College, Claremont, CA {epaeng, jhwu, boerkoel}@g.hmc.edu
Pseudocode | Yes | Algorithm 1: Coin Entrustment
Open Source Code | No | No explicit statement or link regarding the availability of open-source code for the described methodology was found.
Open Datasets | No | Data were collected via Amazon Mechanical Turk, but no access information (link, DOI, repository, or formal citation to a public dataset) is provided for the collected dataset.
Dataset Splits | No | The paper describes a human-robot interaction experiment involving game rounds, not a machine learning setup with explicit training, validation, or test splits.
Hardware Specification | No | No specific hardware details (GPU/CPU models, memory, or cloud instance specifications) used for running experiments or analysis were mentioned.
Software Dependencies | No | No specific software dependencies with version numbers were mentioned.
Experiment Setup | Yes | Our algorithm cooperates on the first round, and defects only if the opponent has defected twice in a row. To explore both the initial emergence of trust and cooperation and its reemergence after a betrayal of trust, our strategy also defects on round 8 if it has not already defected in the previous rounds.
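
The strategy quoted in the Experiment Setup row is a tit-for-two-tats variant with a forced defection probe on round 8. Below is a minimal Python sketch of that strategy, assuming rounds are numbered from 1 and that each player's past moves are available as a list of booleans (True = cooperate); the function name and signature are illustrative, not from the paper.

```python
def robot_move(round_number: int,
               opponent_history: list[bool],
               own_history: list[bool]) -> bool:
    """Return True to cooperate, False to defect.

    Sketch of the strategy described in the paper's experiment setup,
    under the assumptions stated above.
    """
    # Cooperate unconditionally on the first round.
    if round_number == 1:
        return True
    # Forced defection on round 8 if this agent has not yet defected,
    # probing the reemergence of trust after a betrayal.
    if round_number == 8 and all(own_history):
        return False
    # Defect only if the opponent defected twice in a row.
    if (len(opponent_history) >= 2
            and not opponent_history[-1]
            and not opponent_history[-2]):
        return False
    return True


# Example: the opponent defected on rounds 2 and 3,
# so the robot defects on round 4.
print(robot_move(4, [True, False, False], [True, True, True]))  # False
```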