Fair Influence Maximization: A Welfare Optimization Approach
Authors: Aida Rahmattalabi, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice, Milind Tambe
AAAI 2021, pp. 11630-11638 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on synthetic and real-world datasets, including a case study on landslide risk management, demonstrate the efficacy of the proposed framework. |
| Researcher Affiliation | Academia | ¹University of Southern California, ²Harvard University, ³RAND Corporation |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions a full version at 'https://arxiv.org/abs/2006.07906' which is a link to the paper itself, not to source code. There is no explicit statement about releasing code or a link to a code repository for the methodology described. |
| Open Datasets | No | The paper mentions using 'synthetic and real social networks', stochastic block model (SBM) networks, and 'in-person semi-structured interview data' from a case study in Sitka, Alaska, which were used to estimate the SBM parameters. However, it does not provide concrete access information (e.g., links, DOIs, or specific citations with author/year for public datasets) that would allow any of these datasets to be publicly accessed. |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages, sample counts, or citations to predefined splits) needed to reproduce data partitioning into training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions the Independent Cascade Model but does not specify any software names with version numbers for the libraries, frameworks, or solvers used in the implementation. |
| Experiment Setup | Yes | We report the average results over 20 random instances and set p = 0.25 in all experiments. ... Figure 2 summarizes results across different budget values K ranging from 2% to 10% of the network size N for our framework (different α values) as well as the baselines. |
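
The Experiment Setup row reports a propagation probability p = 0.25, averaging over 20 random instances, and budgets K ranging from 2% to 10% of the network size N, with SBM networks and the Independent Cascade model mentioned elsewhere in the table. The sketch below is not the authors' code (none is released); it only illustrates those reported settings. The SBM block sizes and probabilities and the degree-based seed heuristic are illustrative assumptions, not values or methods from the paper, which instead optimizes a welfare objective over group-level outcomes.

```python
"""Minimal sketch of the reported experimental settings (illustrative only).

Reported in the paper: Independent Cascade with p = 0.25, averages over
20 random instances, budgets K from 2% to 10% of the network size N on
SBM networks. The SBM parameters and the degree-based seed heuristic
below are assumptions made for this sketch, not values from the paper.
"""
import random
import networkx as nx

P_PROPAGATE = 0.25   # edge activation probability reported in the paper
N_INSTANCES = 20     # number of random instances averaged in the paper


def independent_cascade(graph, seeds, p=P_PROPAGATE):
    """Run one Independent Cascade simulation; return the set of activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        new_frontier = []
        for u in frontier:
            for v in graph.neighbors(u):
                if v not in active and random.random() < p:
                    active.add(v)
                    new_frontier.append(v)
        frontier = new_frontier
    return active


def expected_spread(graph, seeds, n_runs=100):
    """Monte Carlo estimate of the expected number of activated nodes."""
    return sum(len(independent_cascade(graph, seeds)) for _ in range(n_runs)) / n_runs


def run_experiment():
    # Illustrative SBM: two communities of 100 nodes each (not the paper's parameters,
    # which were estimated from interview data collected in Sitka, Alaska).
    sizes = [100, 100]
    probs = [[0.10, 0.01], [0.01, 0.10]]
    n = sum(sizes)
    budgets = [max(1, int(frac * n)) for frac in (0.02, 0.04, 0.06, 0.08, 0.10)]

    for k in budgets:
        totals = []
        for _ in range(N_INSTANCES):
            g = nx.stochastic_block_model(sizes, probs)
            # Simple highest-degree heuristic as a stand-in for the paper's
            # welfare-based seed selection.
            seeds = [u for u, _ in sorted(g.degree, key=lambda x: -x[1])[:k]]
            totals.append(expected_spread(g, seeds))
        avg = sum(totals) / len(totals)
        print(f"K = {k:3d} ({100 * k / n:.0f}% of N): avg spread = {avg:.1f}")


if __name__ == "__main__":
    run_experiment()
```

This kind of re-implementation can reproduce the general experimental protocol (budget sweep, averaging over random SBM instances), but not the paper's specific results, since the actual SBM parameters, the welfare-based selection algorithm, and the baselines are not provided with the paper.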