Correlated Cascades: Compete or Cooperate
Authors: Ali Zarezade, Ali Khodadadi, Mehrdad Farajtabar, Hamid Rabiee, Hongyuan Zha
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on synthetic data and two real datasets gathered from Twitter (URL-shortening and music-streaming services) illustrate the superior performance of the proposed model over the alternatives. |
| Researcher Affiliation | Academia | Sharif University of Technology, Azadi Ave, Tehran, Iran; Georgia Institute of Technology, North Ave NW, Atlanta, GA 30332, United States |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Implementation codes and datasets can be found at https://github.com/alikhodadadi/C4. |
| Open Datasets | Yes | We use the data crawled from Twitter (Hodas and Lerman 2014). |
| Dataset Splits | No | We set aside the last 20% of the data for the test set. The models are trained five times on 20% to 100% of the training data, with β found by cross-validation. This describes the test split and cross-validation for hyperparameter tuning, but not an explicit validation split (percentage or count) separate from the training data; a sketch of this split protocol follows the table. |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models or memory amounts) used for running the experiments are provided in the paper. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | The parameters of the models were drawn randomly from uniform distributions, μ_{i,p} ∼ U(0, 0.1) and α_{i,j} ∼ U(0, 0.01). Also, we set β = 1. In the correlated models, we set β = 0.1, 1, 100 to see the effect of the mark function on the competitive or cooperative behavior of the proposed model. We trained 10 models on 10% to 100% of the synthetic training data. A sketch of this initialization follows the table. |
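
The Experiment Setup row quotes the parameter ranges used for the synthetic experiments. Below is a minimal sketch of that initialization; the problem sizes (`n_users`, `n_products`) and the shapes of the Hawkes-style parameter arrays are assumptions for illustration, not taken from the authors' released code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed problem sizes; the paper does not fix these in the quoted passage.
n_users, n_products = 100, 2

# Base intensities mu_{i,p} ~ U(0, 0.1), one per (user, product) pair.
mu = rng.uniform(0.0, 0.1, size=(n_users, n_products))

# Influence weights alpha_{i,j} ~ U(0, 0.01), one per (user, user) pair.
alpha = rng.uniform(0.0, 0.01, size=(n_users, n_users))

beta_synthetic = 1.0                   # beta = 1 when generating synthetic data
betas_correlated = [0.1, 1.0, 100.0]   # beta values probed for the mark function
```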
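The Dataset Splits row describes holding out the last 20% of events for testing and training on growing fractions of the rest. The sketch below illustrates that protocol under the assumption that events are ordered chronologically; `events` and the five-run fraction schedule are placeholders consistent with the quoted description, not the authors' actual pipeline.

```python
import numpy as np

def make_splits(events, n_runs=5):
    """Chronological 80/20 train/test split, then 20%..100% training fractions."""
    split = int(0.8 * len(events))
    train, test = events[:split], events[split:]
    fractions = np.linspace(0.2, 1.0, n_runs)  # 20%, 40%, ..., 100% of train
    subsets = [train[: int(f * len(train))] for f in fractions]
    return subsets, test
```

For the synthetic experiments the same idea would apply with `n_runs=10` and fractions from 10% to 100%, per the Experiment Setup row; β would then be chosen by cross-validation within each training subset.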