Strategyproof and Approximately Maxmin Fair Share Allocation of Chores

Authors: Haris Aziz, Bo Li, Xiaowei Wu

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We present both positive and negative results on the level of MMS approximation that can be guaranteed if we require the algorithm to be strategyproof. Our results uncover some interesting contrasts between the approximation ratios achieved for chores versus goods. First, for the cardinal and ordinal models, we design a deterministic sequential picking algorithm, SeqPick, which is strategyproof and unexpectedly achieves an approximation of O(log(m/n)). (See the brute-force MMS sketch after the table.)
Researcher Affiliation | Academia | Haris Aziz (UNSW Sydney and Data61 CSIRO, Australia), Bo Li (Department of Computer Science, Stony Brook University, USA), and Xiaowei Wu (Faculty of Computer Science, University of Vienna, Austria).
Pseudocode | No | The paper describes algorithms in prose (e.g., "SeqPick. Fix a sequence of integers a_1, ..., a_n such that ∑_{i∈[n]} a_i = m. Order the agents arbitrarily. For i = n, n−1, ..., 1, let agent i pick a_i items from the remaining items.") but does not include structured pseudocode or algorithm blocks. (See the SeqPick sketch after the table.)
Open Source Code | No | The paper provides no statement about, or link to, open-source code for the described methodology.
Open Datasets | No | This paper focuses on theoretical analysis and algorithm design rather than empirical studies using datasets, so there is no mention of dataset availability.
Dataset Splits | No | This paper focuses on theoretical analysis and algorithm design, not empirical experiments that would require dataset splits.
Hardware Specification | No | This paper is theoretical and does not describe experiments run on specific hardware.
Software Dependencies | No | The paper is theoretical and does not mention specific software dependencies with version numbers.
Experiment Setup | No | This paper focuses on theoretical analysis and algorithm design, not empirical experiment setups with hyperparameters or training configurations.
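For readers unfamiliar with the fairness notion assessed in the Research Type row: for chores, an agent's maxmin share (MMS) is the smallest worst-bundle cost the agent can guarantee by partitioning all chores into n bundles and receiving the costliest one. The following minimal Python sketch is not from the paper; the function name and the brute-force enumeration are our own illustration, feasible only for tiny instances.

    from itertools import product

    def mms_for_chores(costs, n):
        """Brute-force maxmin-share (MMS) value of one agent for chores.

        costs: this agent's (non-negative) costs, one per chore.
        n: number of agents, i.e., number of bundles in the partition.

        For chores, MMS_i = min over all n-partitions of the chores of
        the maximum bundle cost. Exponential (n^m assignments), so this
        is only a sanity-check tool, not the paper's method.
        """
        m = len(costs)
        best = float("inf")
        for assignment in product(range(n), repeat=m):
            bundle_cost = [0.0] * n
            for item, bundle in enumerate(assignment):
                bundle_cost[bundle] += costs[item]
            best = min(best, max(bundle_cost))
        return best

    if __name__ == "__main__":
        # 5 chores, 2 agents: prints 6.0, e.g. bundles {4, 2} vs {3, 2, 1}.
        print(mms_for_chores([4, 3, 2, 2, 1], n=2))

An allocation is then an alpha-approximate MMS allocation if every agent's assigned cost is at most alpha times that agent's MMS value.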
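Likewise, the sequential picking scheme quoted in the Pseudocode row can be sketched directly from the paper's prose. This is a hedged illustration, not the authors' implementation: the quota sequence a_1, ..., a_n and the assumption that each agent greedily picks its cheapest remaining chores are choices made here for concreteness, and the approximation guarantee reported in the abstract depends on how the quotas are set, which the prose excerpt does not specify.

    def seq_pick(cost_matrix, quotas):
        """Sketch of the sequential picking scheme quoted above.

        cost_matrix[i][j]: agent i's (additive) cost for chore j.
        quotas: integers a_1..a_n summing to the number of chores;
                agents are processed in reverse order (i = n, ..., 1,
                as in the paper's prose) and agent i picks quotas[i]
                chores from those still available.

        Each agent here greedily takes its cheapest remaining chores,
        an assumed (natural) picking strategy for chores.
        """
        n = len(quotas)
        m = len(cost_matrix[0])
        assert sum(quotas) == m, "quotas must cover all chores"
        remaining = set(range(m))
        allocation = [None] * n
        for i in reversed(range(n)):  # i = n-1, ..., 0 (0-indexed)
            picks = sorted(remaining, key=lambda j: cost_matrix[i][j])[: quotas[i]]
            allocation[i] = picks
            remaining -= set(picks)
        return allocation

    if __name__ == "__main__":
        costs = [[1, 2, 6, 3],   # agent 1's chore costs
                 [4, 1, 2, 5]]   # agent 2's chore costs
        # Agent 2 picks first (chores 1 and 2), then agent 1 takes the rest.
        print(seq_pick(costs, quotas=[2, 2]))  # [[0, 3], [1, 2]]

Note the strategyproofness intuition the mechanism relies on: each agent's set of options is fixed by the quotas and the earlier agents' picks, so misreporting costs cannot enlarge what an agent can choose from.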