Reactive Versus Anticipative Decision Making in a Novel Gift-Giving Game

Authors: Elias Fernández Domingos, Juan Burguillo, Tom Lenaerts

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we evaluate whether this conclusion extends also to gift-giving games, more concretely, to a game that combines the dictator game with a partner selection process. The recurrent neural network model used here for dictators allows them to reason about a best response to past actions of the receivers (reactive model) or to decide which action will lead to a more successful outcome in the future (anticipatory model). We show for both models the decision dynamics while training, as well as the average behavior. We find that the anticipatory model is the only one capable of accounting for changes in the context of the game, a behavior also observed in experiments, expanding previous conclusions to this more sophisticated game.
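To make the reactive/anticipatory distinction concrete, here is a minimal, purely illustrative sketch (not the paper's MATLAB implementation): a reactive dictator best-responds to the receiver reactions it has already observed, while an anticipatory dictator simulates how a receiver would react to each offer and picks the offer with the best expected future payoff. All names, offer values, and the toy receiver rule are hypothetical.

```python
# Hypothetical sketch contrasting reactive vs. anticipatory dictators.
# OFFERS, ENDOWMENT, and the receiver's threshold rule are illustrative
# assumptions, not values from the paper.

OFFERS = [0, 2, 5]          # amounts the dictator can give (assumed)
ENDOWMENT = 10              # total endowment per round (assumed)

def receiver_accepts(offer, threshold=2):
    """Toy receiver: stays with the dictator iff the offer meets a threshold."""
    return offer >= threshold

def reactive_offer(history):
    """Best response to observed past reactions: reuse the smallest offer
    that previously kept a receiver; escalate if nothing worked."""
    accepted = [offer for offer, stayed in history if stayed]
    if not accepted:
        return max(OFFERS) if history else min(OFFERS)
    return min(accepted)

def anticipatory_offer():
    """One-step lookahead: simulate the receiver's reaction to each offer
    and maximize the payoff that survives partner selection."""
    def payoff(offer):
        # The dictator only keeps (ENDOWMENT - offer) if the receiver stays.
        return (ENDOWMENT - offer) if receiver_accepts(offer) else 0
    return max(OFFERS, key=payoff)

history = [(0, False), (5, True), (2, True)]
print(reactive_offer(history))   # smallest previously accepted offer: 2
print(anticipatory_offer())      # lookahead also selects 2 here
```

The key difference is where the model's knowledge comes from: the reactive rule only consults the history list, whereas the anticipatory rule queries a (learned or assumed) receiver model before any outcome is observed.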
Researcher Affiliation | Academia | Elias Fernández Domingos (1,2,3), Juan Carlos Burguillo (3), and Tom Lenaerts (1,2). (1) AI Lab, Computer Science Department, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium; (2) MLG, Département d'Informatique, Université Libre de Bruxelles, Boulevard du Triomphe CP212, 1050 Brussels, Belgium; (3) Department of Telematic Engineering, University of Vigo, 36310 Vigo, Spain
Pseudocode | No | The paper provides mathematical equations and schematic diagrams to describe the models but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | The code used to implement the models presented can be found at https://github.com/Socrats/anticipation-matlab.
Open Datasets | No | The paper describes simulated game environments and receiver strategies (e.g., 'against a receiver with a fixed strategy', 'against receivers with different strategies drawn randomly'), rather than using a pre-existing, publicly available dataset with concrete access information.
Dataset Splits | No | The paper describes training and testing the models within a simulated environment, but does not specify explicit train/validation/test dataset splits with percentages or sample counts, nor does it refer to standard predefined splits for a dataset.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions that the code is available on GitHub (anticipation-matlab), implying the use of MATLAB, but it does not specify any software versions for MATLAB or any other libraries or dependencies.
Experiment Setup | Yes | It is important to notice that in both models we selected the best parameter settings, which is l = 1 for the reactive model, and λ = 0.3 and β = 1/0.01 = 100 for the anticipating model.
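The quoted setup gives β = 1/0.01 = 100. In models of this kind, β of that form is commonly an inverse temperature in a softmax over predicted action values; that reading is an assumption here, and the sketch below, with hypothetical values, only illustrates why such a large β makes choices nearly deterministic.

```python
# Assumed interpretation: beta as a softmax inverse temperature.
# The value list is hypothetical and not taken from the paper.
import math

def softmax_policy(values, beta=100.0):
    """Return action probabilities proportional to exp(beta * value)."""
    m = max(values)                                  # shift for numerical stability
    exps = [math.exp(beta * (v - m)) for v in values]
    z = sum(exps)
    return [e / z for e in exps]

probs = softmax_policy([0.50, 0.51, 0.49], beta=100.0)
# With beta = 100, even a 0.01 gap in value makes the best action dominate.
```

Under this reading, β = 100 pushes the policy close to pure value maximization, while a small β would make the three actions nearly equiprobable.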