Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Stable and Envy-free Partitions in Hedonic Games

Authors: Nathanaël Barrot, Makoto Yokoo

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We first show that... an individually stable and justified envy-free partition may not exist and deciding its existence is NP-complete. Then, we prove that the top responsiveness property guarantees the existence of a Pareto optimal, individually stable, and envy-free partition, but it is not sufficient for the conjunction of core stability and envy-freeness. Finally, under bottom responsiveness, we show that deciding the existence of an individually stable and envy-free partition is NP-complete, but a Pareto optimal and justified envy-free partition always exists."
Researcher Affiliation | Academia | "Nathanaël Barrot 1,2 and Makoto Yokoo 2,1 — 1 RIKEN AIP Center, 2 Kyushu University"
Pseudocode | Yes | "Algorithm 1: Extended Top Covering Algorithm"
Open Source Code | No | The paper does not provide any link or explicit statement about releasing source code for the methodology described.
Open Datasets | No | The paper is theoretical and uses abstract examples in proofs rather than empirical datasets with access information.
Dataset Splits | No | The paper presents theoretical results and does not mention training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe computational experiments requiring specific hardware.
Software Dependencies | No | The paper is theoretical and does not mention specific software dependencies or their version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or system-level training settings.