Cooperation Enforcement and Collusion Resistance in Repeated Public Goods Games
Authors: Kai Li, Dong Hao
AAAI 2019, pp. 2085-2092 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Moreover, we experimentally show that these strategies can still promote cooperation even when the opponents are both self-learning and collusive. In the simulation of the repeated PGG, the proposed strategy is run against an opponent group containing rational learning players. Such a simulation helps gauge how the proposed strategy would perform against adaptive opponents in practice. The simulation results are shown in Figure 3. (A toy re-implementation sketch follows the table.) |
| Researcher Affiliation | Academia | Kai Li, Shanghai Jiao Tong University (kai.li@sjtu.edu.cn); Dong Hao, University of Electronic Science and Technology of China (haodong@uestc.edu.cn) |
| Pseudocode | Yes | Algorithm 1: A Learning Player's Strategy |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that its source code is open or publicly available. |
| Open Datasets | No | The paper describes a simulated multi-agent game environment rather than training on a static dataset, so it neither states public dataset availability nor provides links or citations for one. |
| Dataset Splits | No | The paper simulates a game environment and does not mention training, validation, or test splits in the conventional sense. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the simulations or experiments. |
| Software Dependencies | No | The paper mentions using an 'average reward reinforcement learning approach (Gosavi 2004)' and provides 'Algorithm 1', but does not specify any software names with version numbers for implementation (e.g., specific libraries, frameworks, or programming language versions). |
| Experiment Setup | Yes | The paper describes the game parameters (e.g., '3-player repeated public goods game with r = 2') and the learning rate parameters ('Set the learning rate parameters α, β;') in Algorithm 1, along with initialization steps for Q and R. Although specific values for α and β are not given, naming these parameters and the initialization steps constitutes explicit setup information. (See the hedged sketches below.) |
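
To make the quoted setup concrete, below is a minimal sketch of the 3-player repeated public goods game with multiplier r = 2, together with an average-reward learner in the spirit of Algorithm 1 and the cited Gosavi (2004) approach. The class and function names, the ε-greedy exploration scheme, and the default values of α, β, and ε are illustrative assumptions; the paper does not publish them.

```python
import random
from collections import defaultdict


def pgg_payoffs(contributions, r=2.0, endowment=1.0):
    """Public goods payoff: contributions are pooled, multiplied by r,
    and the pot is split evenly among all n players.

    With n = 3 and r = 2 (the setting quoted from the paper), each
    contributed unit returns only 2/3 to its contributor, so free
    riding is individually tempting.
    """
    n = len(contributions)
    pot = r * sum(contributions)
    return [endowment - c + pot / n for c in contributions]


class AverageRewardLearner:
    """Average-reward reinforcement learner (R-learning-style sketch).

    Q(s, a) holds relative action values and R the running average
    reward; alpha and beta are their learning rates, matching the
    parameter names quoted from Algorithm 1. The exploration scheme
    and default values are assumptions, not the paper's.
    """

    def __init__(self, actions=(0, 1), alpha=0.1, beta=0.01,
                 epsilon=0.1, seed=None):
        self.actions = actions
        self.alpha = alpha            # learning rate for Q (α in Algorithm 1)
        self.beta = beta              # learning rate for R (β in Algorithm 1)
        self.epsilon = epsilon        # assumed ε-greedy exploration rate
        self.Q = defaultdict(float)   # Q initialized to zero
        self.R = 0.0                  # average-reward estimate initialized to zero
        self.rng = random.Random(seed)

    def act(self, state):
        """ε-greedy choice between defect (0) and contribute (1)."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.Q[(state, a)])

    def update(self, state, action, reward, next_state):
        """One average-reward TD update.

        Classic R-learning adjusts R only after greedy actions; this
        simplified sketch updates it every step for brevity.
        """
        best_next = max(self.Q[(next_state, a)] for a in self.actions)
        td_error = reward - self.R + best_next - self.Q[(state, action)]
        self.Q[(state, action)] += self.alpha * td_error
        self.R += self.beta * (reward - self.R)
```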
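
Building on the `AverageRewardLearner` sketch above, the following loop mirrors the experiment quoted in the Research Type row: one fixed-strategy player repeatedly plays the PGG against a group of learning opponents. The paper's actual enforcing strategy is not reproduced here; `enforcer_action` is a hypothetical conditional-cooperation stub standing in for it, the memory-one state encoding is an assumption, and the two learners act independently rather than collusively.

```python
def run_simulation(rounds=10_000):
    """One fixed-strategy player vs. two average-reward learners in the
    repeated 3-player PGG with r = 2.

    The state seen by the learners is the previous round's joint
    contribution profile (enforcer, learner 1, learner 2).
    """
    learners = [AverageRewardLearner(seed=s) for s in (1, 2)]
    state = (1, 1, 1)  # assume everyone contributed in a fictitious round 0
    for _ in range(rounds):
        # Stub: contribute only if both opponents contributed last round.
        enforcer_action = 1 if state[1] + state[2] == 2 else 0
        learner_actions = [pl.act(state) for pl in learners]
        contributions = (enforcer_action, *learner_actions)
        payoffs = pgg_payoffs(contributions, r=2.0)
        for i, pl in enumerate(learners):
            pl.update(state, learner_actions[i], payoffs[i + 1], contributions)
        state = contributions
    return state


if __name__ == "__main__":
    print("final contribution profile:", run_simulation())
```

Seeding each learner separately keeps the run reproducible while preventing the two learners from moving in lockstep; none of this substitutes for the paper's Figure 3 results, it only illustrates the experimental protocol the table rows describe.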