Learning to Infer Structures of Network Games

Authors: Emanuele Rossi, Federico Monti, Yan Leng, Michael Bronstein, Xiaowen Dong

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We test our method on three different types of network games using both synthetic and real-world data, and demonstrate its effectiveness in network structure inference and superior performance over existing methods." and the section title "5. Experiments".
Researcher Affiliation | Collaboration | Emanuele Rossi (1,2), Federico Monti (1), Yan Leng (3), Michael M. Bronstein (1,4), Xiaowen Dong (4); affiliations: (1) Twitter, London, UK; (2) Imperial College London, London, UK; (3) The University of Texas at Austin, Austin, TX, USA; (4) University of Oxford, Oxford, UK.
Pseudocode | No | The paper describes the model architecture and mathematical formulations but does not include a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "The Indian Villages dataset (Banerjee et al., 2013) is a well-known dataset in the economics literature that contains data from a survey of social networks in 75 villages in rural southern Karnataka, a state in India. (...) The Indian Villages dataset can be accessed at https://doi.org/10.7910/DVN/U3BIHX." and "The Yelp Ratings dataset consists of ratings given by users to businesses, as well as the social connectivity between users. (...) The Yelp dataset can be accessed at https://www.yelp.com/dataset."
Dataset Splits | Yes | "We use 850 graphs in the training set, 50 graphs for validation and 100 graphs for testing and verify that there is no overlap between them." (synthetic data); "40 are used for training, 3 for validation and 5 for testing." (Indian Villages); "4250 graphs are used for training, 250 for validation and 500 for testing." (Yelp Ratings). A sketch reproducing the synthetic split appears after the table.
Hardware Specification | Yes | "We use an AWS p3.16xlarge machine with 8 GPUs."
Software Dependencies | No | The paper mentions using the Adam optimiser and re-implementing the 'skggm QuicGraphicalLasso' and 'DeepGraph' baselines, but does not specify version numbers for programming languages (e.g., Python), deep learning frameworks (e.g., PyTorch), or other key software libraries.
Experiment Setup | Yes | "Our implementation of NuGgeT uses the sum $\sum_{k=1}^{K}$ as the permutation-invariant functions, and two different 2-layer MLPs for φ and ψ. We use the Adam optimiser (Kingma & Ba, 2015) with a learning rate of 0.001, a batch size of 100 and a patience of 50 epochs." and Table 4, "Hyperparameters used for NuGgeT in all experiments", which lists F = 10, F_1 = 10, H = 10, ψ num layers = 2, ψ hidden dim = 100. A hedged sketch of this configuration appears after the table.
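
For concreteness, the synthetic split reported in the Dataset Splits row (850 training graphs, 50 validation, 100 test, with no overlap) could be reproduced along the following lines. This is a minimal sketch, not the authors' code: the `graphs` list, the fixed seed, and the identifiers are hypothetical placeholders.

```python
import random

# Hypothetical reconstruction of the reported synthetic split:
# 850 graphs for training, 50 for validation, 100 for testing.
graphs = list(range(1000))          # placeholder graph identifiers
random.Random(0).shuffle(graphs)    # fixed seed, purely illustrative

train, val, test = graphs[:850], graphs[850:900], graphs[900:]

# Verify there is no overlap between the splits, as the paper states.
assert set(train).isdisjoint(val)
assert set(train).isdisjoint(test)
assert set(val).isdisjoint(test)
```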
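Since no source code is released (see the Open Source Code row), the stated hyperparameters are the only concrete handle on the setup. Below is a hedged PyTorch sketch of the described configuration: two 2-layer MLPs for φ and ψ, a sum over the K observed games as the permutation-invariant aggregation, and Adam with the reported learning rate. The class name, variable names, input dimension, and exact wiring are assumptions, not the authors' implementation; the dimensions follow Table 4 (F = 10, H = 10, ψ hidden dim = 100).

```python
import torch
import torch.nn as nn

def mlp(in_dim, hidden_dim, out_dim):
    """2-layer MLP, matching the '2-layer MLPs for phi and psi' description."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, out_dim),
    )

class SumAggregationEncoder(nn.Module):
    """Hypothetical sketch: phi is applied per observed game, a sum over the
    K games serves as the permutation-invariant aggregation, and psi maps the
    aggregated representation. Dimensions follow Table 4 of the paper."""

    def __init__(self, in_dim=1, F=10, H=10, psi_hidden=100):
        super().__init__()
        self.phi = mlp(in_dim, H, F)        # per-game encoder (assumed shape)
        self.psi = mlp(F, psi_hidden, F)    # post-aggregation MLP

    def forward(self, x):
        # x: (batch, K, in_dim) -- K observed games per example (assumed layout)
        h = self.phi(x)           # (batch, K, F)
        h = h.sum(dim=1)          # permutation-invariant sum over the K games
        return self.psi(h)        # (batch, F)

model = SumAggregationEncoder()
# Optimiser settings as reported: Adam with lr = 0.001; training used a batch
# size of 100 and early stopping with a patience of 50 epochs (not shown here).
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```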