Influence Function Learning in Information Diffusion Networks
Authors: Nan Du, Yingyu Liang, Maria-Florina Balcan, Le Song
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide both theoretical and empirical analysis for our approach, showing that the proposed approach can provably learn the influence function with low sample complexity, be robust to the unknown diffusion models, and significantly outperform existing approaches in both synthetic and real world data. |
| Researcher Affiliation | Academia | Nan Du, Yingyu Liang {DUNAN,YLIANG39}@GATECH.EDU Maria-Florina Balcan NINAMF@CC.GATECH.EDU Le Song LSONG@CC.GATECH.EDU College of Computing, Georgia Institute of Technology, 266 Ferst Drive, Atlanta, 30332 USA |
| Pseudocode | Yes | Algorithm 1 summarizes the algorithm for learning the influence function. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for its described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We further evaluate the performance of our proposed method on the Meme Tracker dataset which includes 300 million blog posts and articles collected from 5,000 active media sites between March 2011 and February 2012 (Leskovec et al., 2009). |
| Dataset Splits | Yes | For the training set, we independently sample 1,024 source sets, and independently generate 8 to 128 cascades for each source set. The test set contains 128 independently sampled source sets with the ground truth influence estimated from 10,000 simulated cascades. ... We split each set of cascades into 60%-train and 40%-test. |
| Hardware Specification | Yes | We arbitrarily divide the 1,024 independent learning problems into 32 individual jobs running on a cluster of 32 cores (AMD Opteron(tm) Processor, 2.5GHz). |
| Software Dependencies | No | The paper mentions tools like NETRATE and a method by Netrapalli & Sanghavi, but it does not provide specific version numbers for any software components or libraries used in the experiments. |
| Experiment Setup | No | The paper describes the data generation process and data splitting, but it does not provide specific details such as hyperparameters (e.g., learning rate, batch size, number of epochs) or other system-level training settings for its algorithms. |
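The sampling and splitting procedure quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration, not the authors' code (which is not released): `sample_source_set` and `simulate_cascade` are hypothetical stand-ins for the paper's unspecified source-set sampler and diffusion simulator, and `num_nodes`, `set_size`, and the cascade placeholders are assumed values.

```python
import random

random.seed(0)

NUM_TRAIN_SOURCE_SETS = 1024  # independently sampled source sets for training
NUM_TEST_SOURCE_SETS = 128    # independently sampled source sets for testing
CASCADES_PER_TRAIN_SET = 8    # the paper varies this from 8 to 128

def sample_source_set(num_nodes, set_size):
    # Hypothetical stand-in: draw a random subset of nodes as diffusion sources.
    return random.sample(range(num_nodes), set_size)

def simulate_cascade(source_set):
    # Hypothetical stand-in for the diffusion simulator: each cascade
    # records its sources and the nodes they reached (here, trivially
    # just the sources themselves).
    return {"sources": source_set, "infected": set(source_set)}

num_nodes, set_size = 128, 4  # assumed network size and source-set size

# Training data: independent source sets, each with its own cascades.
train = [
    [simulate_cascade(s) for _ in range(CASCADES_PER_TRAIN_SET)]
    for s in (sample_source_set(num_nodes, set_size)
              for _ in range(NUM_TRAIN_SOURCE_SETS))
]

# For the real-data experiments, the paper instead splits each set of
# cascades 60%/40% into train and test:
cascades = list(range(100))  # placeholder cascade ids
random.shuffle(cascades)
cut = int(0.6 * len(cascades))
train_cascades, test_cascades = cascades[:cut], cascades[cut:]
```

The split ratio and counts above come directly from the quoted text; everything else is scaffolding so the procedure runs end to end.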