Incentive-Boosted Federated Crowdsourcing
Authors: Xiangping Kang, Guoxian Yu, Jun Wang, Wei Guo, Carlotta Domeniconi, Jinglin Zhang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results confirm that iFedCrowd can complete secure crowdsourcing projects with high quality and efficiency. Extensive simulations are conducted to demonstrate that iFedCrowd can motivate workers to complete secure crowdsourcing projects with high quality and efficiency. We conduct a comparison of its performance with two baselines, namely Random and MAX. Section heading: "Experiment with Real Crowdsourcing Project". |
| Researcher Affiliation | Academia | (1) School of Software, Shandong University, Jinan, China; (2) SDU-NTU Joint Centre for AI Research, Shandong University, Jinan, China; (3) Department of Computer Science, George Mason University, Fairfax, VA, USA; (4) School of Control Science and Engineering, Shandong University, Jinan, China |
| Pseudocode | Yes | Algorithm 1 summarizes the pseudo-code of iFedCrowd ("Algorithm 1: iFedCrowd: incentive-boosted Federated Crowdsourcing"). A generic, hedged sketch of one federated round appears after this table. |
| Open Source Code | Yes | The code of iFedCrowd is shared at www.sduidea.cn/codes.php?name=iFedCrowd. |
| Open Datasets | Yes | We used a real-world dataset called FitRec (Ni, Muhlstein, and McAuley 2019) for experiments. |
| Dataset Splits | No | The paper mentions using a dataset and training a model but does not specify any train/validation/test splits, percentages, or cross-validation method that would let the data partitioning be reproduced. A hypothetical split recipe is sketched after this table. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory, cloud instance types). |
| Software Dependencies | No | The paper states "We implement iFedCrowd with the MindSpore deep learning framework." but gives no version number for MindSpore and lists no other software dependencies. |
| Experiment Setup | No | The paper states the model architecture ("A single layer LSTM followed by a fully connected layer") and the parameters of its game model (ranges for α, β, γ, δ), but it lacks the training hyperparameters for the deep learning model (e.g., learning rate, batch size, epochs, optimizer settings). A hedged architecture sketch appears after this table. |
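
The paper's Algorithm 1 is not reproduced here. As rough orientation only, below is a minimal Python sketch of one incentive-weighted federated round. Everything in it is an assumption: the `local_update` and `federated_round` names, the FedAvg-style weighted averaging, and the use of rewards as aggregation weights are illustrative stand-ins, not iFedCrowd's actual game-theoretic mechanism (which involves the α, β, γ, δ parameters noted in the table above).

```python
import random
from typing import Dict, List

# Hypothetical sketch of one federated round with a simple incentive rule.
# None of these names come from the paper; iFedCrowd's real procedure is
# defined by its Algorithm 1 and its game model.

def local_update(global_weights: List[float], data: List[float],
                 lr: float = 0.01) -> List[float]:
    """Placeholder local training step: nudge each weight toward the data mean."""
    mean = sum(data) / len(data)
    return [w - lr * (w - mean) for w in global_weights]

def federated_round(global_weights: List[float],
                    workers: Dict[str, List[float]],
                    rewards: Dict[str, float]) -> List[float]:
    """FedAvg-style aggregation where each worker's contribution is weighted
    by its (assumed) incentive-driven effort, proxied here by its reward."""
    updates = {wid: local_update(global_weights, data)
               for wid, data in workers.items()}
    total = sum(rewards.values())
    return [sum(rewards[wid] * u[i] for wid, u in updates.items()) / total
            for i in range(len(global_weights))]

if __name__ == "__main__":
    random.seed(0)
    workers = {f"w{k}": [random.gauss(0.5, 0.1) for _ in range(20)]
               for k in range(3)}
    rewards = {"w0": 1.0, "w1": 2.0, "w2": 0.5}  # assumed platform payments
    weights = [0.0, 0.0]
    for _ in range(5):
        weights = federated_round(weights, workers, rewards)
    print(weights)
```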
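
Since no split is reported (see the Dataset Splits row), a reproduction has to pick its own partitioning. The helper below is purely hypothetical: the 80/10/10 ratios, the fixed seed, and the `split_dataset` name are our choices, not the authors' protocol.

```python
import random
from typing import List, Sequence, Tuple

def split_dataset(records: Sequence, seed: int = 42,
                  ratios: Tuple[float, float, float] = (0.8, 0.1, 0.1)
                  ) -> Tuple[List, List, List]:
    """Deterministic train/val/test split; ratios and seed are assumptions,
    since the FitRec partitioning used in the paper is not reported."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    idx = list(range(len(records)))
    random.Random(seed).shuffle(idx)  # fixed seed makes the split repeatable
    n_train = int(ratios[0] * len(idx))
    n_val = int(ratios[1] * len(idx))
    train = [records[i] for i in idx[:n_train]]
    val = [records[i] for i in idx[n_train:n_train + n_val]]
    test = [records[i] for i in idx[n_train + n_val:]]
    return train, val, test

train, val, test = split_dataset(list(range(1000)))
print(len(train), len(val), len(test))  # 800 100 100
```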
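
The one architectural detail the paper does give is "a single layer LSTM followed by a fully connected layer". The sketch below instantiates that description in PyTorch (the authors used MindSpore); the input, hidden, and output sizes, and all training hyperparameters, are assumptions because the paper omits them.

```python
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    """Single-layer LSTM followed by a fully connected layer, per the paper.
    input_size/hidden_size/output_size are assumed values; the paper does not
    report them (nor the optimizer, learning rate, batch size, or epochs)."""
    def __init__(self, input_size: int = 8, hidden_size: int = 64,
                 output_size: int = 1):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers=1,
                            batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)          # out: (batch, seq_len, hidden_size)
        return self.fc(out[:, -1, :])  # predict from the last time step

model = LSTMRegressor()
x = torch.randn(4, 10, 8)  # (batch, seq_len, features): shapes are illustrative
print(model(x).shape)      # torch.Size([4, 1])
```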