Bounded Incentives in Manipulating the Probabilistic Serial Rule

Authors: Zihe Wang, Zhide Wei, Jie Zhang (pp. 2276-2283)

AAAI 2020

Reproducibility assessment (each entry: Variable, Result, LLM Response):
Research Type: Experimental. "To complement this worst-case study, we further evaluate an agent's utility gain on average by experiments. The experiments show that an agent's incentive in manipulating the rule is very limited. These results shed some light on the robustness of Probabilistic Serial against strategic manipulation, which is one step further than knowing that it is not incentive-compatible. In this section, we present numerical experiments on the extent to which an agent can increase its utility by unilateral manipulation in the Probabilistic Serial mechanism."
Researcher Affiliation: Academia. Zihe Wang (Shanghai University of Finance and Economics, China; wang.zihe@mail.shufe.edu.cn); Zhide Wei (School of Electronics Engineering and Computer Science, Peking University, China; zhidewei@pku.edu.cn); Jie Zhang (Electronics and Computer Science, University of Southampton, U.K.; jie.zhang@soton.ac.uk)
Pseudocode: No. The paper does not contain any pseudocode or algorithm blocks.
Open Source Code: No. The paper does not provide any links or explicit statements about the availability of its source code.
Open Datasets: No. The paper states: "We construct an instance by uniformly at random and independently generating each agent's ordinal preferences." It does not provide concrete access information (link, DOI, citation) to a publicly available dataset.
Dataset Splits: No. The paper does not provide specific train/validation/test dataset splits. It describes how instances are generated for experiments, but not how they are partitioned for different phases.
Hardware Specification: No. The paper does not provide any specific hardware details used for running its experiments.
Software Dependencies: No. The paper does not provide any specific software dependencies or their version numbers.
Experiment Setup: Yes. "We set up our experiments as follows. We set n = m, i.e., the number of agents is equal to the number of items. We vary this number from 8 to 20. For each value of n, we generate 10000 of these instances. We construct an instance by uniformly at random and independently generating each agent's ordinal preferences. We make the manipulator's cardinal preferences dichotomous. We vary the number of items the manipulator is interested in, say k, from 2 to 6. For each of these instances, we enumerate all of the manipulator's k! strategies, in order to figure out the largest possible utility the agent can obtain."
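The quoted setup can be sketched in code. Below is a minimal simulation of the Probabilistic Serial "simultaneous eating" rule together with one trial of the described manipulation experiment. The paper itself ships no code, so everything here is an assumption-laden reconstruction: in particular, the choice that the manipulator's dichotomous utility is 1 on its top-k truthful items, and that its k! candidate reports permute those k items at the top while keeping the rest of the ranking truthful, is our reading of the quoted description, not something the review confirms.

```python
import itertools
import random

def probabilistic_serial(prefs, m):
    """Simultaneous-eating simulation of the Probabilistic Serial rule.
    prefs[i] lists agent i's items from most to least preferred.
    Returns alloc with alloc[i][j] = probability share of item j for agent i."""
    n = len(prefs)
    remaining = [1.0] * m                # unit supply per item
    pointer = [0] * n                    # index of each agent's next preference
    alloc = [[0.0] * m for _ in range(n)]
    while True:
        # each agent moves on to its favourite item that still has supply
        for i in range(n):
            while pointer[i] < m and remaining[prefs[i][pointer[i]]] < 1e-12:
                pointer[i] += 1
        eaters = {}
        for i in range(n):
            if pointer[i] < m:
                eaters.setdefault(prefs[i][pointer[i]], []).append(i)
        if not eaters:
            return alloc
        # advance time until the first contested item is exhausted
        dt = min(remaining[j] / len(ag) for j, ag in eaters.items())
        for j, ag in eaters.items():
            remaining[j] -= dt * len(ag)
            for i in ag:
                alloc[i][j] += dt

def best_manipulation_gain(n, k, rng):
    """One random instance with n agents and n items, as in the quoted setup.
    Assumption: agent 0 (the manipulator) values its top-k truthful items at 1
    and the rest at 0, and its k! strategies reorder those k items at the top
    of its report while leaving the tail truthful."""
    m = n
    prefs = [rng.sample(range(m), m) for _ in range(n)]
    liked, tail = prefs[0][:k], prefs[0][k:]
    truthful_u = sum(probabilistic_serial(prefs, m)[0][j] for j in liked)
    best_u = truthful_u
    for perm in itertools.permutations(liked):
        report = list(perm) + tail
        alloc = probabilistic_serial([report] + prefs[1:], m)
        best_u = max(best_u, sum(alloc[0][j] for j in liked))
    return best_u - truthful_u
```

Averaging `best_manipulation_gain` over many random instances (the paper uses 10000 per value of n, with n from 8 to 20 and k from 2 to 6) would estimate the average utility gain the paper reports as very limited.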