Neural Networks for Predicting Human Interactions in Repeated Games
Authors: Yoav Kolumbus, Gali Noti
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show on a dataset of normal-form games from experiments with human participants that standard neural networks are able to learn functions that provide more accurate predictions of the players' actions than established models from behavioral economics. The networks outperform the other models in terms of prediction accuracy and cross-entropy, and yield higher economic value. |
| Researcher Affiliation | Academia | (1) The School of Computer Science and Engineering, Hebrew University of Jerusalem, Israel. (2) Racah Institute of Physics, Hebrew University of Jerusalem, Israel. (3) Federmann Center for the Study of Rationality, Hebrew University of Jerusalem, Israel. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a direct statement about open-sourcing the code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We use for evaluation the 2x2 two-player game dataset of [Selten and Chmura, 2008] |
| Dataset Splits | Yes | The hyperparameters for the models and the number of layers and size of each layer were optimized on a validation dataset that consisted of 5% of the training sequences. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, processor types) used to run its experiments. |
| Software Dependencies | Yes | All neural network models were implemented in Keras 2.1.4 [Chollet, 2015] and Tensorflow 1.5.0 [Abadi et al., 2015]. |
| Experiment Setup | Yes | The hyperparameters for the models and the number of layers and size of each layer were optimized on a validation dataset that consisted of 5% of the training sequences. We use k = 20 for input sequence length. For MLP, training is performed with dropout regularization with a weight deletion rate of 0.3, and we use the Adam optimizer with a learning rate of 0.0002 and a batch size of 64 sequences. CNN uses the same regularization and optimization methods. (See the sketch after this table.) |
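
The table reports the training configuration (dropout 0.3, Adam with learning rate 0.0002, batch size 64, 5% validation split, input sequence length k = 20) and the software stack (Keras 2.1.4, TensorFlow 1.5.0), but not the exact architecture. The following is a minimal sketch of how such an MLP could be set up in Keras under those reported settings; the layer sizes, input encoding, output layer, and epoch count are illustrative assumptions, not the authors' actual model.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adam

K = 20          # input sequence length reported in the paper
FEATURES = 2    # assumed encoding: one binary action per player per round

# Assumed architecture: two hidden layers of 64 units each. The paper only
# states that the number of layers and layer sizes were tuned on a
# validation set of 5% of the training sequences.
model = Sequential([
    Dense(64, activation='relu', input_shape=(K * FEATURES,)),
    Dropout(0.3),                     # reported weight deletion rate
    Dense(64, activation='relu'),
    Dropout(0.3),
    Dense(1, activation='sigmoid'),   # assumed: probability of one of the two actions
])

model.compile(optimizer=Adam(lr=0.0002),   # reported learning rate
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Hypothetical placeholder data with the assumed shapes, for illustration only.
X = np.random.randint(0, 2, size=(1000, K * FEATURES)).astype('float32')
y = np.random.randint(0, 2, size=(1000, 1)).astype('float32')

model.fit(X, y,
          batch_size=64,            # reported batch size
          epochs=10,                # not reported; illustrative
          validation_split=0.05)    # 5% validation split, as reported
```

Per the table, the CNN variant uses the same regularization and optimization settings; under the same assumptions it would differ only in replacing the dense hidden layers with convolutional layers over the input sequence.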