Learnability of Influence in Networks

Authors: Harikrishna Narasimhan, David C. Parkes, Yaron Singer

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We show PAC learnability of influence functions for three common influence models, namely the Linear Threshold (LT), Independent Cascade (IC), and Voter models, and present concrete sample complexity results in each case. Our results for the LT model are based on interesting connections with neural networks; those for the IC model are based on an interpretation of the influence function as an expectation over a random draw of a subgraph and use covering number arguments; and those for the Voter model are based on a reduction to linear regression. (An illustrative sketch of the IC subgraph view appears after the table.)
Researcher Affiliation | Academia | Harikrishna Narasimhan, David C. Parkes, Yaron Singer; Harvard University, Cambridge, MA 02138; hnarasimhan@seas.harvard.edu, {parkes, yaron}@seas.harvard.edu. Part of this work was done when HN was a Ph.D. student at the Indian Institute of Science, Bangalore.
Pseudocode | No | The paper describes its algorithms conceptually in prose but does not present them as structured pseudocode.
Open Source Code | No | The paper does not contain any statement about open-source code availability or links to code repositories.
Open Datasets | No | The paper is theoretical and focuses on learnability guarantees rather than empirical evaluation on specific datasets. It refers to a 'training sample' only in the theoretical PAC-learnability setting, without naming a publicly available dataset.
Dataset Splits | No | The paper discusses theoretical learnability and sample complexity, but it does not provide details on training, validation, or test dataset splits for empirical experiments.
Hardware Specification | No | The paper is theoretical and does not describe the hardware used for any experiments.
Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper discusses theoretical algorithms and their properties, but it does not provide specific details about experimental setup, hyperparameters, or training configurations.
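
The IC result summarized in the Research Type row rests on viewing the influence function as an expectation over random "live-edge" subgraphs: each edge is kept independently with its activation probability, and the influence of a seed set is the expected number of nodes reachable from it. Below is a minimal Python sketch of that standard interpretation, assuming a directed graph with independent edge probabilities. It illustrates the subgraph view only, not the paper's learning algorithm, and all names (edge_probs, influence_mc, and so on) are hypothetical.

```python
import random
from collections import deque

def sample_live_edge_subgraph(nodes, edge_probs, rng):
    """Keep each directed edge (u, v) independently with probability p_uv."""
    live = {u: [] for u in nodes}
    for (u, v), p in edge_probs.items():
        if rng.random() < p:
            live[u].append(v)
    return live

def reachable(live, seeds):
    """BFS: nodes reachable from the seed set in the sampled subgraph."""
    seen = set(seeds)
    queue = deque(seeds)
    while queue:
        u = queue.popleft()
        for v in live[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def influence_mc(nodes, edge_probs, seeds, num_samples=1000, seed=0):
    """Monte Carlo estimate of E[|reachable(seeds)|] over random subgraphs,
    i.e. the IC influence function written as an expectation over a random
    draw of a subgraph."""
    rng = random.Random(seed)
    total = 0
    for _ in range(num_samples):
        live = sample_live_edge_subgraph(nodes, edge_probs, rng)
        total += len(reachable(live, seeds))
    return total / num_samples

# Toy example: a 4-node directed graph with independent edge probabilities.
nodes = [0, 1, 2, 3]
edge_probs = {(0, 1): 0.5, (1, 2): 0.5, (2, 3): 0.5, (0, 2): 0.2}
print(influence_mc(nodes, edge_probs, seeds={0}))
```

Averaging reachable-set sizes over many sampled subgraphs is what makes covering number arguments natural for this model, since the influence function is a (bounded) expectation of simple reachability indicators.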