On Fair Cost Sharing Games in Machine Learning

Authors: Ievgen Redko, Charlotte Laclau
Pages: 4790-4797

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we take a closer look at a special class of games, known as fair cost sharing games, from a machine learning perspective. We show that this particular kind of game, where agents can choose between selfish behaviour and cooperation with shared costs, has a natural link to several machine learning scenarios, including collaborative learning with homogeneous and heterogeneous sources of data. We further demonstrate how the game-theoretical results bounding the ratio between the best Nash equilibrium (or its approximate counterpart) and the optimal solution of a given game can be used to upper-bound the gain achievable through collaborative learning, expressed as the expected risk and the sample complexity in the homogeneous and heterogeneous cases, respectively.
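The bound mentioned above can be illustrated on a toy instance. In a fair cost sharing game, the classic result is that the ratio between the best Nash equilibrium and the optimum (the price of stability) is at most the harmonic number H(n). The sketch below is a minimal, hypothetical example, not the paper's construction: each of n agents either pays its own selfish cost or an equal share of one shared resource of cost C; all strategy profiles are enumerated by brute force to find the optimum and the cheapest pure Nash equilibrium.

```python
from itertools import product

def social_cost(profile, selfish, C):
    """Total cost: selfish agents pay their own cost; the shared
    resource costs C in total if at least one agent uses it."""
    k = sum(profile)  # number of agents on the shared resource
    return sum(selfish[i] for i, p in enumerate(profile) if not p) + (C if k else 0)

def is_nash(profile, selfish, C):
    """No agent can lower its own payment by unilaterally deviating."""
    k = sum(profile)
    for i, p in enumerate(profile):
        if p and selfish[i] < C / k:            # sharer prefers going selfish
            return False
        if not p and C / (k + 1) < selfish[i]:  # selfish agent prefers joining
            return False
    return True

selfish = [1.0, 0.6, 0.4]  # hypothetical individual (selfish) costs
C = 1.5                    # cost of the shared resource, split equally
n = len(selfish)

profiles = list(product([0, 1], repeat=n))
opt = min(social_cost(p, selfish, C) for p in profiles)
best_ne = min(social_cost(p, selfish, C)
              for p in profiles if is_nash(p, selfish, C))
H_n = sum(1 / k for k in range(1, n + 1))

# Price-of-stability bound: best NE is within H(n) of the optimum.
print(best_ne, opt, best_ne / opt <= H_n)
```

On this instance the unique equilibrium is fully selfish (cost 2.0) while the optimum has everyone sharing (cost 1.5), so the ratio 4/3 sits comfortably below H(3) ≈ 1.83; the paper's contribution is to translate such bounds into learning-theoretic guarantees.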
Researcher Affiliation | Academia | Ievgen Redko, Charlotte Laclau: Univ Lyon, UJM-Saint-Etienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516, F-42023 Saint-Etienne, France; name.surname@univ-st-etienne.fr
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access information for open-source code.
Open Datasets | No | The paper discusses theoretical concepts of learning samples and probability distributions but does not refer to any specific publicly available datasets, nor does it provide access information.
Dataset Splits | No | The paper does not provide the dataset split information (percentages, sample counts, or citations to predefined splits) needed to reproduce data partitioning, as it focuses on theoretical analysis.
Hardware Specification | No | The paper does not provide any specific hardware details used for running experiments, as the work is theoretical.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers.
Experiment Setup | No | The paper does not contain specific experimental setup details, such as hyperparameter values or training configurations, as it is a theoretical paper.