Learning Diffusion using Hyperparameters
Authors: Dimitris Kalimeris, Yaron Singer, Karthik Subbian, Udi Weinsberg
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we use large-scale diffusion data from Facebook to show that a hyperparametric model using approximately 20 features per node achieves remarkably high accuracy. Lastly, we show that the hyperparametric approach does work in practice. To do so, we ran experiments on large scale cascades recorded on the Facebook social network. |
| Researcher Affiliation | Collaboration | ¹Department of Computer Science, Harvard University; ²Facebook, Menlo Park. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it state that the code is available in supplementary materials or elsewhere. |
| Open Datasets | Yes | Real Graphs: We also use the ego-facebook, wiki-Vote, bitcoin-otc and bitcoin-alpha datasets from (Leskovec & Krevl, 2014), which are publicly available real-world social networks, enabling the reproducibility of our experiments. Leskovec, J. and Krevl, A. SNAP Datasets: Stanford Large Network Dataset Collection. 2014. URL http://snap.stanford.edu/data. |
| Dataset Splits | No | The paper mentions generating 100,000 samples for the training set but does not provide specific details on training/validation/test splits, nor does it explicitly mention using a separate validation set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers needed to replicate the experiment. |
| Experiment Setup | Yes | Subsequently, we generate 100,000 samples, and attempt to solve the optimization problem (2) using SGD, initializing the hyperparameter to 0 and using a learning rate of 1/T, where T is the size of the training set. |
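The quoted experiment setup can be sketched in code. This is a minimal, hedged illustration only: the paper does not release code, so the feature dimension (the reported "approximately 20 features per node"), the logistic edge-activation model, and the synthetic data below are assumptions for illustration; only the training-set size of 100,000 samples, the zero initialization, and the 1/T learning rate come from the quoted text.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 20          # assumed feature dimension ("approximately 20 features per node")
T = 100_000     # training-set size, as in the quoted setup
lr = 1.0 / T    # learning rate 1/T, as in the quoted setup

# Synthetic stand-in data (not from the paper): feature vectors x_t and
# binary activation outcomes y_t drawn from a logistic model.
X = rng.normal(size=(T, D))
true_theta = rng.normal(size=D)
p = 1.0 / (1.0 + np.exp(-X @ true_theta))
y = (rng.random(T) < p).astype(float)

# SGD on a single hyperparameter vector, initialized to 0.
theta = np.zeros(D)
for t in range(T):
    pred = 1.0 / (1.0 + np.exp(-X[t] @ theta))
    grad = (pred - y[t]) * X[t]   # per-sample logistic-loss gradient (assumed loss)
    theta -= lr * grad
```

With a learning rate of 1/T the per-step updates are tiny; the sketch only mirrors the stated configuration, not the paper's actual loss or diffusion model.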