Learning Influence Functions from Incomplete Observations
Authors: Xinran He, Ke Xu, David Kempe, Yan Liu
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and real-world datasets demonstrate the ability of our method to compensate even for a fairly large fraction of missing observations. |
| Researcher Affiliation | Academia | Xinran He, Ke Xu, David Kempe, Yan Liu; University of Southern California, Los Angeles, CA 90089; {xinranhe, xuk, dkempe, yanliu.cs}@usc.edu |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. |
| Open Source Code | No | The paper states: "We use the preprocessed version of the dataset released by Du et al. [3] and available at http://www.cc.gatech.edu/~ndu8/InfluLearner.html." This link points to a preprocessed dataset associated with a baseline, not to code for the authors' own method; no other statement or link releasing such code was found. |
| Open Datasets | Yes | We further evaluate the performance of our method on the real-world MemeTracker dataset [11]. The dataset consists of the propagation of short textual phrases... We use the preprocessed version of the dataset released by Du et al. [3] and available at http://www.cc.gatech.edu/~ndu8/InfluLearner.html. |
| Dataset Splits | Yes | Subsequently, we generate 8192 cascades as training data... The test set contains 200 independently sampled seed sets... We follow exactly the same evaluation method as Du et al. [3] with a training/test set split of 60%/40%. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory specifications) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) were mentioned in the paper. |
| Experiment Setup | Yes | For the model-free approaches (InfluLearner and our algorithm), we use K = 200 features. |
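The 60%/40% training/test split reported in the Dataset Splits row can be illustrated with a minimal sketch. This is not the authors' code; the cascade representation (a flat list of cascade IDs), the shuffling step, and the fixed seed are assumptions for illustration only.

```python
import random

def split_cascades(cascades, train_frac=0.6, seed=0):
    """Shuffle cascades and split them into train/test sets by fraction."""
    rng = random.Random(seed)          # fixed seed for a repeatable split
    shuffled = list(cascades)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Example with 8192 cascades, the training-set size used in the synthetic
# experiments (the IDs here are placeholders, not real cascade data).
train, test = split_cascades(range(8192))
print(len(train), len(test))  # 4915 3277
```

With 8192 cascades, a 60% cut yields 4915 training and 3277 test cascades; any real pipeline would split actual cascade records rather than integer IDs.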