On Misinformation Containment in Online Social Networks
Authors: Guangmo (Amo) Tong, Ding-Zhu Du, Weili Wu
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the proposed algorithm by experiments. Our goal is to examine the performance of ALG. 2 by (a) comparing it to baseline methods and (b) measuring the data-dependent approximation ratio given in Theorem 5. Our experiments are performed on a server with a 2.2 GHz eight-core processor. |
| Researcher Affiliation | Academia | Guangmo (Amo) Tong, Department of Computer and Information Sciences, University of Delaware (amotong@udel.edu); Weili Wu, Department of Computer Science, University of Texas at Dallas (weiliwu@utdallas.edu); Ding-Zhu Du, Department of Computer Science, University of Texas at Dallas (dzdu@utdallas.edu) |
| Pseudocode | Yes | Algorithm 1 Greedy scheme |
| Open Source Code | No | The paper discusses various techniques and datasets, but it does not contain an explicit statement about the availability of its own source code, nor does it provide a link to a code repository for the methodology described. |
| Open Datasets | Yes | The first dataset, collected from Twitter, is built after monitoring the spreading process of the messages posted between 1st and 7th July 2012 regarding the discovery of a new particle with the features of the elusive Higgs boson [17]. ... The second dataset, denoted by HepPh, is a citation graph from the e-print arXiv with 34,546 papers [23]. ... [17] De Domenico, Manlio, et al. 'The anatomy of a scientific rumor.' Scientific Reports 3 (2013): 2980. [23] J. Leskovec and A. Krevl. (Jun. 2014). SNAP Datasets: Stanford Large Network Dataset Collection. [Online]. Available: http://snap.stanford.edu/data |
| Dataset Splits | No | The paper describes datasets used (Higgs-10K, Higgs-100K, Hep Ph) but does not provide specific train/validation/test split percentages, sample counts, or methodologies for these datasets. |
| Hardware Specification | Yes | Our experiments are performed on a server with a 2.2 GHz eight-core processor. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers, such as programming languages, libraries, or frameworks (e.g., 'Python 3.8', 'PyTorch 1.9'). |
| Experiment Setup | Yes | On Higgs-10K, the probability of edge (u, v) is set to be proportional to the frequency of the activities between u and v. In particular, we set p(u,v) as (a_i / a_max) * p_max + p_base, where a_i is the number of activities from u to v, a_max is the maximum number of activities among all the edges, and p_max = 0.2 and p_base = 0.4 are two constants. On Higgs-100K, we adopt the uniform setting where the propagation probability on each edge is set as 0.1. On HepPh, we adopt the weighted cascade setting and set p(u,v) as 1/deg(v), where deg(v) is the number of in-neighbors of v. ... For each existing cascade, the size of the seed set is set as 20 ... The budget of P is enumerated from {1, 2, ..., 20} ... The cascade priority at each node is assigned randomly by generating a random permutation over {1, 2, 3}. ... the function value is estimated by 5,000 Monte Carlo simulations whenever f_M is called, and the final solution of each algorithm is evaluated by 10,000 simulations. |
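The three edge-probability settings quoted in the Experiment Setup row can be sketched in a few lines. This is a minimal illustration of the stated formulas only; the function names and the `monte_carlo_estimate` helper are ours, not from the paper, and `simulate_once` stands in for whatever cascade simulator the authors used.

```python
# Sketch of the three edge-probability settings described in the paper.
# All function names here are illustrative, not the authors' code.

P_MAX, P_BASE = 0.2, 0.4  # constants quoted for the Higgs-10K setting


def activity_based(activities: int, a_max: int) -> float:
    """Higgs-10K: p(u,v) = (a_i / a_max) * p_max + p_base,
    proportional to the activity frequency between u and v."""
    return (activities / a_max) * P_MAX + P_BASE


def uniform() -> float:
    """Higgs-100K: every edge gets the same propagation probability 0.1."""
    return 0.1


def weighted_cascade(in_degree: int) -> float:
    """HepPh: p(u,v) = 1 / deg(v), where deg(v) is v's in-degree."""
    return 1.0 / in_degree


def monte_carlo_estimate(simulate_once, runs: int = 5000) -> float:
    """Average `runs` independent cascade simulations; the paper reports
    5,000 runs per objective call and 10,000 for the final evaluation."""
    return sum(simulate_once() for _ in range(runs)) / runs
```

For example, `weighted_cascade(4)` gives 0.25 for a node with four in-neighbors, and `monte_carlo_estimate` would wrap a single-run cascade simulator to estimate the objective value.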