Sequential Blocked Matching

Authors: Nicholas Bishop, Hau Chan, Debmalya Mandal, Long Tran-Thanh (pp. 4834-4842)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We first consider the offline SBM setting, where the strategic agents are aware of their true preferences. We measure the performance of any policy by its distortion, the worst-case multiplicative approximation ratio guaranteed by the policy. For the setting with s services, we establish lower bounds of Ω(s) and Ω(√s) on the distortion of any deterministic and randomised mechanism, respectively. We complement these results with deterministic and randomised policies based on random serial dictatorship that match our lower bounds and are approximately truthful, as measured by incentive ratio. Our results show a significant improvement when one considers the class of randomised policies. Finally, we consider the online SBM setting with bandit feedback, where each agent is initially unaware of her true preferences, and the planner must help each agent learn her preferences through the matching of services over time. We design an approximately truthful mechanism based on the explore-then-commit paradigm, which achieves logarithmic dynamic approximate regret.
Researcher Affiliation | Academia | Nicholas Bishop (University of Southampton), Hau Chan (University of Nebraska-Lincoln), Debmalya Mandal (Max Planck Institute), Long Tran-Thanh (University of Warwick)
Pseudocode | Yes | The pseudocode for RRSD is given in the full version. [...] The full pseudocode for BRRSD is deferred to the full version.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methods is publicly available.
Open Datasets | No | The paper is theoretical and does not use real-world datasets for training or evaluation. It discusses theoretical 'reward profiles' and 'preference orderings' within its model.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with dataset splits. It does not mention train/validation/test splits.
Hardware Specification | No | The paper is theoretical and does not conduct empirical experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe empirical experiments requiring specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on model design, analysis, and proofs. It does not include an empirical experimental setup with hyperparameters or specific training configurations.
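For intuition, the random serial dictatorship primitive that the paper's RRSD and BRRSD policies build on can be sketched in a few lines. This is a minimal illustration of plain serial dictatorship with a random priority order, not the paper's repeated or bandit variants; the function and argument names are our own.

```python
import random


def random_serial_dictatorship(preferences, services, rng=random):
    """Match each agent to at most one service via random serial dictatorship.

    preferences: dict mapping each agent to a list of services,
        most preferred first.
    services: iterable of available services.
    rng: source of randomness for the priority order.
    Returns a dict mapping each matched agent to her service.
    """
    available = set(services)
    order = list(preferences)
    rng.shuffle(order)  # draw a uniformly random priority order over agents
    matching = {}
    for agent in order:
        # each agent, in priority order, takes her most preferred
        # service that is still available (if any remain)
        for service in preferences[agent]:
            if service in available:
                matching[agent] = service
                available.remove(service)
                break
    return matching
```

Because every agent simply reports a ranking and picks greedily in a random order, the mechanism is truthful in the one-shot setting; the paper's analysis concerns how approximately-truthful variants of this idea perform under blocking constraints and bandit feedback.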