Mechanism Design with Predicted Task Revenue for Bike Sharing Systems
Authors: Hongtao Lv, Chaoli Zhang, Zhenzhe Zheng, Tie Luo, Fan Wu, Guihai Chen (pp. 2144–2151)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using an industrial dataset obtained from a large bike-sharing company, our experiments show that TruPreTar is effective in rebalancing bike supply and demand and, as a result, generates high revenue that outperforms several benchmark mechanisms. |
| Researcher Affiliation | Academia | 1Department of Computer Science and Engineering, Shanghai Jiao Tong University, China 2Department of Computer Science, Missouri University of Science and Technology, USA |
| Pseudocode | Yes | Algorithm 1: TruPreTar: a truthful and budget feasible incentive mechanism with predicted task revenue |
| Open Source Code | No | The paper provides a link to its full version on arXiv but does not explicitly state that source code for the methodology is available or provide a direct link to a code repository. |
| Open Datasets | No | We conduct simulation using a real-world dataset obtained from a large bike-sharing company in China called Mobike. The bike riding data cover 8 × 8 regions of Beijing with each region being 0.6 km × 0.6 km, and are dated from May 10th to 14th, 2017. |
| Dataset Splits | No | The paper refers to using a real-world dataset for simulations but does not specify explicit training/validation/test dataset splits needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | Yes | In the experiments, we set the number of users n = 200, and test different location numbers m. The cost of each user ci is drawn from a uniform distribution over [0, c̄] where c̄ = 5. The value of a task is calculated as the difference between the Kullback-Leibler (KL) divergences (Kullback and Leibler 1951) before and after fulfilling the task... The acceptable range h is set as 300 m and 600 m, respectively. We also test budgets of 50 and 500, where 500 is sufficient while 50 is not. |
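The task-value rule quoted above (the difference between KL divergences before and after fulfilling a task) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the argument order of the KL divergence (which distribution serves as the reference) and the example distributions are assumptions.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions, with a small epsilon for numerical stability."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    eps = 1e-12
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def task_value(ideal, before, after):
    """Value of a task = KL(ideal || before) - KL(ideal || after):
    how much fulfilling the task moves the bike supply distribution
    toward the ideal (demand-proportional) one. Positive means the
    rebalancing task is beneficial."""
    return kl_divergence(ideal, before) - kl_divergence(ideal, after)

# Hypothetical 4-region example:
ideal  = [0.25, 0.25, 0.25, 0.25]   # demand-proportional target distribution
before = [0.70, 0.10, 0.10, 0.10]   # supply before the rebalancing task
after  = [0.40, 0.20, 0.20, 0.20]   # supply after the task is fulfilled

v = task_value(ideal, before, after)  # positive: supply moved toward demand
```

Under the experiment setup quoted in the table, user costs would then be drawn as `np.random.uniform(0, 5, size=200)` for n = 200 users with c̄ = 5.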