Online Learning of Partitions in Additively Separable Hedonic Games

Authors: Saar Cohen, Noa Agmon

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | First, we prove that no-static regret is hard to approximate. We show that, if we have an efficient learner, then we can devise an efficient algorithm that approximately solves an offline variant of our problem, proven to be hard to approximate by Bilò et al. [2022]. Our result also indicates that a constant approximation to no-static regret is unattainable in polynomial time. Yet, for a fractional relaxation of our problem, we devise an algorithm that simultaneously attains the optimal static and dynamic regret. We then present a rounding scheme with an optimal dynamic regret, which converts our algorithm's output into a solution for our original problem. (An illustrative rounding sketch appears after the table.)
Researcher Affiliation | Academia | Saar Cohen and Noa Agmon, Department of Computer Science, Bar-Ilan University, Israel; saar30@gmail.com, agmon@cs.biu.ac.il
Pseudocode | Yes | Algorithm 1: Online Gradient Descent for k-FOP (a hedged sketch of this style of update appears after the table).
Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and does not use or refer to publicly available datasets in the context of empirical training or evaluation. It discusses abstract 'sequences of agent sets' and 'joint disutility profiles' for theoretical analysis.
Dataset Splits | No | The paper is theoretical and does not involve empirical data splits (training, validation, test) for experimental reproduction.
Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies or version numbers for implementation.
Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs rather than on an empirical experimental setup with hyperparameters or training configurations.
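
The paper's pseudocode for Algorithm 1 is not reproduced here. As a rough illustration only, the following Python sketch shows a generic projected online gradient descent update of the kind the table row names, in which each agent maintains a fractional distribution over k coalitions. The function names, the gradient oracle grad_fn, and the learning rate eta are hypothetical and do not come from the paper.

import numpy as np

def project_to_simplex(v):
    # Euclidean projection onto the probability simplex
    # (standard sort-based algorithm).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def ogd_fractional_partition(grad_fn, n_agents, k, T, eta):
    # Hypothetical projected OGD: x[i] is agent i's fractional
    # distribution over k coalitions, updated with the round-t
    # disutility gradient and projected back onto the simplex.
    x = np.full((n_agents, k), 1.0 / k)
    iterates = []
    for t in range(T):
        g = grad_fn(x, t)  # disutility gradient revealed at round t
        x = np.apply_along_axis(project_to_simplex, 1, x - eta * g)
        iterates.append(x.copy())
    return iterates

# Example with purely synthetic linear disutility gradients.
rng = np.random.default_rng(0)
iterates = ogd_fractional_partition(
    lambda x, t: rng.normal(size=x.shape), n_agents=5, k=3, T=100, eta=0.1)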
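The abstract also mentions a rounding scheme that converts the fractional iterates into an actual partition while preserving optimal dynamic regret. The paper's scheme is not given in this summary; the snippet below is only the simplest conceivable baseline, independent sampling from each agent's distribution, included to make the fractional-to-integral step concrete.

import numpy as np

def round_fractional_partition(x, rng=None):
    # Naive randomized rounding (NOT the paper's scheme): sample each
    # agent's coalition independently from its fractional distribution,
    # then group agents by sampled coalition label.
    rng = np.random.default_rng() if rng is None else rng
    n, k = x.shape
    labels = np.array([rng.choice(k, p=x[i]) for i in range(n)])
    return {c: np.flatnonzero(labels == c).tolist() for c in range(k)}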