Learning Quadratic Games on Networks

Authors: Yan Leng, Xiaowen Dong, Junfeng Wu, Alex Pentland

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Both synthetic and real-world experiments demonstrate the effectiveness of the proposed frameworks, which have theoretical as well as practical implications for understanding strategic interactions in a network environment.
Researcher Affiliation | Academia | (1) McCombs School of Business, The University of Texas at Austin, Austin, TX, USA; (2) Department of Engineering Science, University of Oxford, Oxford, UK; (3) College of Control Science and Engineering, Zhejiang University, Hangzhou, China; (4) Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA.
Pseudocode | Yes | Algorithm 1: Learning Games with Independent Marginal Benefits; Algorithm 2: Learning Games with Homophilous Marginal Benefits.
Open Source Code | No | The information is insufficient. The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We consider inferring a social network between households in a village in rural India (Banerjee et al., 2013). The data can be accessed via https://atlas.media.mit.edu/en/resources/data/. The voting statistics were obtained via http://www.swissvotes.ch.
Dataset Splits | No | The information is insufficient. The paper does not explicitly provide dataset split information (percentages, sample counts, or citations to predefined splits) for training, validation, or test sets.
Hardware Specification | No | The information is insufficient. The paper does not provide any specific hardware details (such as GPU models, CPU types, or memory) used for running the experiments.
Software Dependencies | Yes | In our experiments, we solve the problem of Eq. (5) using the Python software package CVXOPT (Andersen et al., 2018), version 1.2.0, available at cvxopt.org (a usage sketch follows the table).
Experiment Setup | Yes | In the following and all subsequent analyses, we choose ρ(βG) = 0.6, and fix the parameters in Algorithm 2 to be the ones that lead to the best learning performance. We tune β within the range β ∈ [-3, 3]. The best performance of Algorithm 1 is obtained with β = 0.1, θ1 = 2^{-8.5}, and θ2 = 2^{1}, while that of Algorithm 2 is obtained with β = 2.6, θ1 = 2^{7}, and θ2 = 2^{-5.5} (a hyperparameter-search sketch follows the table).
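
Eq. (5) itself is not reproduced in this summary, so the following is only a minimal sketch of how a generic convex quadratic program (minimize 0.5 x'Px + q'x subject to Gx <= h) can be passed to CVXOPT 1.2.x via solvers.qp. The problem data P, q, G, h below are hypothetical placeholders, not the paper's actual objective or constraints.

```python
# Minimal CVXOPT sketch with hypothetical problem data (not the paper's Eq. (5)).
# Solves a generic convex QP:  minimize 0.5 * x'Px + q'x  subject to  Gx <= h.
from cvxopt import matrix, solvers

# CVXOPT expects its own column-major 'matrix' type with double entries.
P = matrix([[2.0, 0.5], [0.5, 1.0]])    # positive-definite quadratic term
q = matrix([1.0, -1.0])                 # linear term
G = matrix([[-1.0, 0.0], [0.0, -1.0]])  # -x <= 0, i.e. x >= 0
h = matrix([0.0, 0.0])

solvers.options['show_progress'] = False  # silence iteration log
sol = solvers.qp(P, q, G, h)
print(sol['status'], list(sol['x']))      # solver status and optimal point
```

The data are wrapped in matrix(...) rather than passed as NumPy arrays because CVXOPT's solvers operate on its own dense (and sparse) matrix types.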
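The tuning reported above (β on a linear range, θ1 and θ2 reported as powers of two) reads like an exhaustive hyperparameter search. The sketch below illustrates that pattern under the assumption of a simple grid search; evaluate() is a hypothetical stand-in for the paper's learning-performance metric, which is not specified in this summary.

```python
# Hypothetical grid-search sketch over (beta, theta1, theta2).
import itertools
import numpy as np

def evaluate(beta, theta1, theta2):
    # Placeholder objective; in practice this would run the learning algorithm
    # and score the recovered network against a validation criterion.
    return -((beta - 0.1) ** 2) \
           - (np.log2(theta1) + 8.5) ** 2 \
           - (np.log2(theta2) - 1.0) ** 2

betas = np.arange(-3.0, 3.0 + 1e-9, 0.1)                           # beta in [-3, 3]
theta_grid = [2.0 ** e for e in np.arange(-9.0, 9.0 + 1e-9, 0.5)]  # powers of 2

best = max(
    itertools.product(betas, theta_grid, theta_grid),
    key=lambda params: evaluate(*params),
)
print("best (beta, theta1, theta2):", best)
```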