General Robustness Evaluation of Incentive Mechanism against Bounded Rationality Using Continuum-Armed Bandits

Authors: Zehong Hu, Jie Zhang, Zhao Li (pp. 6070-6078)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We also conduct extensive experiments using various mechanisms to verify the advantages and practicability of our robustness evaluation framework. ... We first validate the reliability and efficiency of our framework in a hypothetical testbed where accurate results are known, and then illustrate its generality and applicability on five popular peer prediction mechanisms and the widely-adopted procurement mechanisms in crowdsourcing. ... Figure 2: (a) Accurate EQ(γ) vs. results of CABS; (b) Average regret comparison of different algorithms
Researcher Affiliation | Collaboration | (1) Alibaba Group, Hangzhou, China; (2) School of Computer Engineering, Nanyang Technological University, Singapore
Pseudocode | Yes | Algorithm 1: Simulation Scheme-I; Algorithm 2: Simulation Scheme-II; Algorithm 3: Bounded Rationality Level Exploration
Open Source Code | No | The paper does not provide any statement about making its source code publicly available or a link to a code repository.
Open Datasets | No | The paper conducts experiments on hypothetical testbeds and simulations with defined parameters (e.g., "the agent's type θ follows the uniform distribution U[0.1, 10]", "workers' true desired wage θ follows Be(ψ1, ψ2)") rather than using pre-existing, publicly available datasets with specific access information (links, DOIs, or formal citations).
Dataset Splits | No | The paper describes validating its solver and framework, but it does not specify train/validation/test splits for any datasets. The experiments are based on simulated scenarios with defined distributions.
Hardware Specification | No | The paper does not provide any details about the specific hardware (e.g., CPU, GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, specific solvers or libraries) needed to replicate the experiments.
Experiment Setup | Yes | For the hypothetical testbed, we consider a simple scenario where only one agent exists. Suppose the agent's type θ follows the uniform distribution U[0.1, 10]. Given θ, the agent's action a follows the beta distribution Be(θ, t), where t = (1/2) log(1/γ). The performance measurement g is set as g(γ) = a, and the performance robustness RP is calculated using our framework. In this case, EQ(γ) can be theoretically calculated as 1 − C0 + (t/9.9) log((t + 0.1)/(t + 10)). Figure 2a shows the computation results of CABS when C0 = 0.5. ... we keep δ as 0.2 to ensure the reliability of the evaluation results. ... For bounded rationality models, we consider malicious bad-mouthing agents that intentionally report 0 (Du, Huang, and others 2013). ... We assume workers' true desired wage θ follows Be(ψ1, ψ2). Besides, a worker's bounded rational report is a = θ + (1 − θ)δ, where δ ∼ Be(1, −1.5 log(γ)) and γ ∈ [0, 1] can be regarded as the worker's bounded rationality level. Moreover, since truthful reporting is always the dominant strategy in BFM, RI ≡ 1. ... Here, n denotes the number of participating workers, and the total budget is set as 75.
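The testbed's closed form can be sanity-checked numerically: a ∼ Be(θ, t) has mean θ/(θ + t), so averaging that mean over θ ∼ U[0.1, 10] and subtracting the cost C0 should reproduce EQ(γ). A minimal sketch follows; the formulas for t, EQ(γ), and the worker-report model are reconstructions of the garbled excerpt above, and all function names are illustrative, not from the paper:

```python
import math
import random

def t_of_gamma(gamma):
    # Assumed reconstruction of the excerpt's "t = 1 2 log(1 γ)":
    # t = (1/2) * log(1/gamma), which is nonnegative for gamma in (0, 1].
    return 0.5 * math.log(1.0 / gamma)

def eq_closed_form(gamma, c0=0.5):
    # EQ(γ) = 1 - C0 + (t/9.9) * log((t + 0.1)/(t + 10)), obtained by
    # integrating E[a] = θ/(θ + t) over θ ~ U[0.1, 10].
    t = t_of_gamma(gamma)
    return 1.0 - c0 + (t / 9.9) * math.log((t + 0.1) / (t + 10.0))

def eq_monte_carlo(gamma, c0=0.5, n=200_000, seed=0):
    # Monte Carlo estimate: sample θ, average the Beta mean θ/(θ + t),
    # then subtract the cost C0.
    rng = random.Random(seed)
    t = t_of_gamma(gamma)
    total = sum(
        (theta := rng.uniform(0.1, 10.0)) / (theta + t) for _ in range(n)
    )
    return total / n - c0

def bounded_rational_report(theta, gamma, rng):
    # Assumed reconstruction of the procurement model: report
    # a = θ + (1 - θ)δ with δ ~ Be(1, -1.5 * log(γ)); the second Beta
    # parameter is positive for gamma in (0, 1).
    delta = rng.betavariate(1.0, -1.5 * math.log(gamma))
    return theta + (1.0 - theta) * delta
```

Under this reconstruction the Monte Carlo estimate and the closed form agree to a few decimal places (e.g. both give roughly 0.39 at γ = 0.5, C0 = 0.5), which is the kind of check Figure 2a performs against CABS output.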