Multi-Winner Contests for Strategic Diffusion in Social Networks
Authors: Wen Shen, Yang Feng, Cristina V. Lopes (pp. 6154-6162)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments on four real-world social network datasets demonstrate that stakeholders can significantly boost participants' aggregated efforts with proper design of competitions. |
| Researcher Affiliation | Academia | Wen Shen, Yang Feng, Cristina V. Lopes University of California, Irvine, California 92697, United States wen.shen@uci.edu, yang.feng@uci.edu, lopes@ics.uci.edu |
| Pseudocode | Yes | Algorithm 1 Multi-Winner Contests Mechanism (an illustrative sketch follows the table) |
| Open Source Code | No | The paper does not provide an explicit statement or link to the open-source code for the described methodology. |
| Open Datasets | Yes | We used four publicly available datasets: Twitter (Hodas and Lerman 2014), Flickr (Cha, Mislove, and Gummadi 2009), Flixster (Goyal, Bonchi, and Lakshmanan 2011), and Digg (Hogg and Lerman 2012). |
| Dataset Splits | No | The paper describes using datasets for simulation and analysis but does not specify training, validation, or test dataset splits (e.g., percentages or counts for each split). |
| Hardware Specification | Yes | We ran all the experiments on the same 3.7GHz 6-core Linux machine with 32GB RAM. |
| Software Dependencies | No | The paper mentions learning algorithms and models but does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks with their versions). |
| Experiment Setup | Yes | We set λ = 0.5 as it was standard in many geometric reward mechanisms. In practice, a stakeholder usually sets ϕ < 1 to make profits, but ϕ should be as close to 1 as possible to encourage players to participate. We let ϕ = 1. To encourage players to join, we set µ = 0.9, and φ = ϕ − µ = 0.1. Note that η ≤ λ/2 = 0.25; we set η = 0.25. For each group of players in each dataset, we varied the noise factors from 0 to 1 with an increment of 0.05. For each result (i.e., a data point) obtained, we ran the respective experiment 20 times. (A sketch of this sweep also appears after the table.) |
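
The pseudocode entry above refers to the paper's Algorithm 1 (Multi-Winner Contests Mechanism). The Python sketch below is not a reproduction of that algorithm; it only illustrates the geometric-reward idea the experiment setup relies on (λ-discounted rewards along referral chains), combined with an assumed top-k winner selection and an equal split of a hypothetical prize pool. The function names, the `referrer` map, and the splitting rule are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def geometric_chain_rewards(chain, total_reward, lam=0.5):
    """Split total_reward over a referral chain using geometric
    lambda-discounting (lambda = 0.5 in the paper's setup).
    chain[0] is the contributing player; chain[i] referred chain[i-1]."""
    weights = np.array([lam ** i for i in range(len(chain))])
    shares = total_reward * weights / weights.sum()
    return dict(zip(chain, shares))

def multi_winner_contest(efforts, referrer, k, prize_pool, lam=0.5):
    """Illustrative only: pick the k highest-effort players as winners,
    split the prize pool equally across the winning chains, and discount
    each chain's share geometrically toward the seed player."""
    winners = sorted(efforts, key=efforts.get, reverse=True)[:k]
    payouts = {}
    for w in winners:
        chain, node = [], w
        while node is not None:          # walk up the referral tree
            chain.append(node)
            node = referrer.get(node)
        for player, share in geometric_chain_rewards(chain, prize_pool / k, lam).items():
            payouts[player] = payouts.get(player, 0.0) + share
    return payouts

# Toy usage: 'a' seeded 'b', 'b' referred 'c'; the top-2 winners share a pool of 1.0.
print(multi_winner_contest({"a": 1, "b": 5, "c": 3},
                           {"a": None, "b": "a", "c": "b"}, k=2, prize_pool=1.0))
```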
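
The experiment-setup row describes a sweep over noise factors with 20 repetitions per data point. The snippet below is a minimal sketch of that loop under the reported parameter values (λ, ϕ, µ, φ, η); `simulate_contest` is a hypothetical placeholder for a single contest simulation on one dataset and is not provided by the paper.

```python
import numpy as np

# Parameter values reported in the experiment setup.
LAM, PHI, MU, VARPHI, ETA = 0.5, 1.0, 0.9, 0.1, 0.25

def run_noise_sweep(simulate_contest, n_runs=20):
    """Vary the noise factor from 0 to 1 in steps of 0.05 and average the
    simulated aggregated effort over n_runs repetitions per data point.
    `simulate_contest(noise, **params)` is an assumed callable, not an
    artifact released with the paper."""
    noise_factors = np.arange(0.0, 1.0 + 1e-9, 0.05)
    results = []
    for noise in noise_factors:
        runs = [simulate_contest(noise, lam=LAM, phi=PHI, mu=MU,
                                 varphi=VARPHI, eta=ETA)
                for _ in range(n_runs)]
        results.append((float(noise), float(np.mean(runs))))
    return results
```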