On the Convergence of the Shapley Value in Parametric Bayesian Learning Games

Authors: Lucas Agussurja, Xinyi Xu, Bryan Kian Hsiang Low

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The effectiveness of our framework is demonstrated with experiments using real-world data. ... 5. Experimental Results: In this section, our framework is empirically verified by showing that the Shapley values converge over time in several 2-player scenarios, followed by multi-player scenarios.
Researcher Affiliation | Academia | 1 Department of Computer Science, National University of Singapore, Singapore. 2 Institute for Infocomm Research, A*STAR, Singapore.
Pseudocode | No | The paper describes the steps of its framework in paragraph text but does not include formal pseudocode or an algorithm block.
Open Source Code | Yes | Our code is publicly available at https://github.com/XinyiYS/Parametric-Bayesian-Learning-Games.
Open Datasets | Yes | We will investigate parameter estimation for BLR on three real-world datasets: California housing (CaliH) data (Pace & Barry, 1997), King County house sales prediction (Harlfoxem, 2016), and age estimation from facial images (Zhang et al., 2017); and mean estimation in a learned 2-dimensional latent space of a VAE on the digits 0 and 1 of MNIST.
Dataset Splits | No | The paper describes how data points are sampled and used by players (e.g., 'P1 samples P1 sample size data points...') and discusses the number of iterations, but does not provide explicit training, validation, or test dataset splits (e.g., percentages or counts) for the datasets.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications).
Software Dependencies | No | The paper mentions training models like 'deep neural networks' and 'variational auto-encoder (VAE)' but does not specify versions of software dependencies (e.g., Python, PyTorch, TensorFlow, specific libraries).
Experiment Setup | Yes | Implementation of framework in Section 4: Starting with m_i initial data points for P_i (i.e., player i), we perform Bayesian inference to obtain the (joint) posterior mean θ (as an estimate of the true parameter), which is used to approximate the Fisher information Î_i; this in turn determines the number r_i of data points for player i to collect in the next iteration as r_i = r_{i*} |Î_{i*} Î_i^{-1}|^{1/k}, where i* := argmax_{i∈N} |Î_i| and the number r_{i*} of data points collected by the player with the highest Fisher information is a preset constant; then update m_i ← m_i + r_i.
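The BLR parameter estimation named in the datasets row can be sketched as a conjugate Gaussian posterior computation. This is only a minimal illustration of the estimation task, not the authors' code: the synthetic data stands in for a real dataset such as California housing, and all variable names and the noise/prior settings below are assumptions.

```python
import numpy as np

# Illustrative Bayesian linear regression (BLR) with a conjugate Gaussian
# prior and known noise variance; synthetic data stands in for a real dataset.
rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
true_theta = np.array([1.0, -2.0, 0.5])
sigma2 = 0.25                                     # assumed known noise variance
y = X @ true_theta + rng.normal(scale=np.sqrt(sigma2), size=n)

# Prior theta ~ N(0, tau2 * I) gives a Gaussian posterior with
#   cov  = (X'X / sigma2 + I / tau2)^-1
#   mean = cov @ X'y / sigma2
tau2 = 10.0
post_cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(k) / tau2)
post_mean = post_cov @ X.T @ y / sigma2           # the parameter estimate
```

The posterior mean plays the role of the parameter estimate that the experiment setup row feeds into the Fisher information approximation.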
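The iterative data-collection rule quoted in the experiment setup row can be sketched as follows, reading the rule as r_i = r_{i*} |Î_{i*} Î_i^{-1}|^{1/k} with r_{i*} a preset constant for the player with the largest Fisher information determinant. All function and variable names here are illustrative, not from the paper's released code.

```python
import numpy as np

def update_sample_sizes(fisher_hats, r_star, k):
    """Given each player's estimated Fisher information matrix I_hat_i,
    return (i_star, [r_1, ..., r_n]): the index i* of the player with the
    largest |I_hat_i|, who collects the preset constant r_star points, and
    the counts r_i = r_star * |I_hat_{i*} @ inv(I_hat_i)|**(1/k) for the rest.
    """
    dets = [np.linalg.det(I) for I in fisher_hats]
    i_star = int(np.argmax(dets))
    r = []
    for i, I in enumerate(fisher_hats):
        if i == i_star:
            r.append(r_star)
        else:
            ratio = np.linalg.det(fisher_hats[i_star] @ np.linalg.inv(I))
            r.append(int(round(r_star * ratio ** (1.0 / k))))
    return i_star, r

# Example: two players estimating a k = 2 dimensional parameter.
I1 = np.array([[4.0, 0.0], [0.0, 4.0]])   # det = 16  -> the "richer" player
I2 = np.array([[1.0, 0.0], [0.0, 1.0]])   # det = 1
i_star, r = update_sample_sizes([I1, I2], r_star=10, k=2)
# Player 2 collects 10 * (16/1)**(1/2) = 40 points to catch up.
```

After each round, each m_i would be incremented by r_i and the posterior recomputed, matching the update m_i ← m_i + r_i described in the setup.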