First-Choice Maximality Meets Ex-ante and Ex-post Fairness

Authors: Xiaoxi Guo, Sujoy Sikdar, Lirong Xia, Yongzhi Cao, Hanpin Wang

IJCAI 2023

Reproducibility Variable | Result | LLM Response
------------------------ | ------ | ------------
Research Type | Theoretical | "We provide two novel randomized mechanisms, the generalized eager Boston mechanism (GEBM) and the generalized probabilistic Boston mechanism (GPBM), both of which satisfy ex-post FCM and PE together with different combinations of desirable efficiency and fairness properties as we summarize in Table 1. We also show that no strategyproof mechanism satisfies ex-post PE, EF1, and FCM simultaneously."
Researcher Affiliation | Academia | (1) Key Laboratory of High Confidence Software Technologies (MOE), School of Computer Science, Peking University; (2) Department of Computer Science, Binghamton University; (3) Department of Computer Science, Rensselaer Polytechnic Institute; (4) School of Computer Science and Cyber Engineering, Guangzhou University
Pseudocode | Yes | Algorithm 1 (Generalized Eager Boston Mechanism, GEBM) and Algorithm 2 (Generalized Probabilistic Boston Mechanism, GPBM)
Open Source Code | No | The paper links to its full version on arXiv (https://arxiv.org/abs/2305.04589) but does not state that source code for the described methodology is available, nor does it link to a code repository.
Open Datasets | No | The paper is theoretical and involves no empirical evaluation on datasets, so it mentions no training datasets or their availability.
Dataset Splits | No | The paper presents theoretical mechanisms and proofs, so it includes no training, validation, or test dataset splits.
Hardware Specification | No | The paper focuses on theoretical mechanism design and describes no experimental setup or hardware.
Software Dependencies | No | The paper presents theoretical algorithms and proofs and specifies no software dependencies or version numbers.
Experiment Setup | No | The paper is theoretical and describes no experimental setups, hyperparameters, or training configurations.
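For background, the mechanisms named above (GEBM and GPBM) generalize the classic Boston, or immediate-acceptance, mechanism from school choice. The sketch below shows only that standard deterministic rule, not the paper's algorithms; the name-order tie-breaking within a round is an assumption here (the paper's randomized mechanisms break ties differently).

```python
from typing import Dict, List

def boston_mechanism(prefs: Dict[str, List[str]],
                     capacity: Dict[str, int]) -> Dict[str, str]:
    """Classic (deterministic) Boston / immediate-acceptance mechanism.

    prefs: each agent's strict ranking over items, best first.
    capacity: number of copies available per item.
    Ties within a round are broken by agent-name order (an assumption
    for this sketch only).
    """
    remaining = dict(capacity)
    assignment: Dict[str, str] = {}
    n_rounds = max(len(ranking) for ranking in prefs.values())
    for k in range(n_rounds):  # round k: unassigned agents apply to their (k+1)-th choice
        applicants: Dict[str, List[str]] = {}
        for agent, ranking in sorted(prefs.items()):
            if agent not in assignment and k < len(ranking):
                applicants.setdefault(ranking[k], []).append(agent)
        # each item immediately and permanently accepts applicants up to capacity
        for item, agents in applicants.items():
            for agent in agents[:remaining.get(item, 0)]:
                assignment[agent] = item
                remaining[item] -= 1
    return assignment

# Example: a1 and a2 both rank x first; x has one copy, so a1 gets it in
# round 0 and a2 ends up unassigned (y is taken by a3 in the same round).
print(boston_mechanism({"a1": ["x", "y"], "a2": ["x", "y"], "a3": ["y", "x"]},
                       {"x": 1, "y": 1}))
```

Because every item accepts as many first-round applicants as capacity allows, the immediate-acceptance rule assigns a maximal number of agents to their first choices, which is the intuition behind the first-choice maximality (FCM) property studied in the paper.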