Towards Effective Planning Strategies for Dynamic Opinion Networks

Authors: Bharath Muppasani, Protik Nag, Vignesh Narayanan, Biplav Srivastava, Michael Huhns

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results demonstrate that the ranking algorithm-based classifiers provide plans that enhance infection rate control, especially with increased action budgets for small networks.
Researcher Affiliation | Academia | Bharath Muppasani, Protik Nag, Vignesh Narayanan, Biplav Srivastava, and Michael N. Huhns, AI Institute and Department of Computer Science, University of South Carolina, USA. {bharath@email., pnag@email., vignar@, biplav.s@, huhns@}sc.edu
Pseudocode | Yes | Pseudocode for this ranking algorithm is presented in Algorithm 1 in Appendix A.5.
Open Source Code | Yes | The code and the datasets developed as part of the analysis presented in this paper can be found in [33]. [33] B. Muppasani, P. Nag, V. Narayanan, B. Srivastava, and M. N. Huhns. Code and datasets for the paper, 2024. Available at: https://github.com/ai4society/InfoSpread-NeurIPS-24.
Open Datasets | No | The datasets used in related works, such as [17], typically consist of network structures, and no real-time opinion propagation data could be found. Therefore, to evaluate our intervention strategies, we generated two sets of synthetic datasets using the Watts-Strogatz model with the training dataset's configurations. (A generation sketch follows the table.)
Dataset Splits | No | The model parameters yielding the best performance on the validation set are preserved for subsequent evaluation phases.
Hardware Specification | Yes | We used two servers to run our experiments: one with 48-core nodes, each hosting two V100 32GB GPUs and 128GB of RAM, and another with 256 cores, eight A100 40GB GPUs, and 1TB of RAM. The processor speed is 2.8 GHz.
Software Dependencies | No | The development of our supervised learning models, particularly those utilizing graph convolutional networks, leveraged several Python packages instrumental in defining, training, and evaluating our models: torch, torch_geometric, networkx. ... The implementation of our ResNet model and the training process was facilitated by the following Python packages: torch, torch.nn, torch.nn.functional, torch.optim.
Experiment Setup | Yes | Our SL setup is coupled with a ranking algorithm, which is shown in Algorithm 1. We use a GCN with an input size of 3 (opinion value, degree of node, proximity to source node), a hidden size of 128, and an output size of 1. The model was trained using the Adam optimizer with a learning rate of 0.001 and a binary cross-entropy loss function. The training process involved 1000 epochs, where in each epoch a graph with 25 nodes was generated. ... The neural network model is trained using a variant of Q-learning... The learning rate is set to 5 × 10^-4, and mean squared error (MSE) loss is utilized... We used a batch size of 100 across the experiments. The policy network parameters are optimized using the Adam optimizer, and the target network's parameters are periodically updated to reflect the policy network, reducing the likelihood of divergence. The training process continues for 300 episodes... (Sketches of both training setups follow the table.)
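
The Open Datasets row reports that the evaluation data were generated synthetically with the Watts-Strogatz model. A minimal generation sketch using networkx is below; the node count, rewiring parameters, and the opinion encoding in [-1, 1] are illustrative assumptions, not the paper's exact configurations.

```python
import networkx as nx
import numpy as np

def generate_opinion_network(n=25, k=4, p=0.3, seed=None):
    """Build a Watts-Strogatz small-world graph with random initial opinions.

    n, k, and p are illustrative values; the paper's exact training
    configurations are not reproduced here.
    """
    g = nx.watts_strogatz_graph(n=n, k=k, p=p, seed=seed)
    rng = np.random.default_rng(seed)
    # Assign each node a continuous opinion value in [-1, 1] (assumed encoding).
    for node in g.nodes:
        g.nodes[node]["opinion"] = rng.uniform(-1.0, 1.0)
    return g

# Example: a small synthetic dataset of 100 graphs with distinct seeds.
graphs = [generate_opinion_network(seed=i) for i in range(100)]
```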
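The Experiment Setup row fixes the GCN dimensions (input 3, hidden 128, output 1), the Adam optimizer with learning rate 0.001, and a binary cross-entropy loss. A minimal sketch with torch and torch_geometric follows; the two-layer depth, the ReLU/sigmoid activations, and the top-k selection helper `rank_and_select` (standing in for the paper's ranking algorithm, whose details live in Algorithm 1) are assumptions.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class NodeScorer(nn.Module):
    """Per-node scorer with the reported sizes (input 3, hidden 128, output 1).

    The two-layer structure and activations are assumptions; only the layer
    sizes and training hyperparameters are quoted in the table above.
    """
    def __init__(self, in_dim=3, hidden=128, out_dim=1):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, out_dim)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        # One score per node in (0, 1), suitable for binary cross-entropy.
        return torch.sigmoid(self.conv2(h, edge_index))

model = NodeScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # reported lr 0.001
loss_fn = nn.BCELoss()

def rank_and_select(scores, budget):
    """Hypothetical ranking step: intervene on the top-`budget` nodes by score."""
    return torch.topk(scores.squeeze(-1), k=budget).indices
```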
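For the RL setup, the same row reports a Q-learning variant with MSE loss, learning rate 5 × 10^-4, batch size 100, and a target network periodically synchronized with the policy network. A minimal sketch of that temporal-difference update is below; the network architecture, discount factor, and synchronization period are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

# Policy and target networks; the layer sizes here are placeholders, not the
# paper's architecture. Only lr, loss, and batch size come from the table.
policy_net = nn.Sequential(nn.Linear(25, 128), nn.ReLU(), nn.Linear(128, 25))
target_net = copy.deepcopy(policy_net)
optimizer = torch.optim.Adam(policy_net.parameters(), lr=5e-4)
loss_fn = nn.MSELoss()
GAMMA, BATCH_SIZE, TARGET_SYNC = 0.99, 100, 10  # gamma and sync period assumed

def td_update(states, actions, rewards, next_states, dones):
    """One temporal-difference step on a sampled batch of transitions."""
    q_pred = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrap from the frozen target network for stability.
        q_next = target_net(next_states).max(dim=1).values
        q_target = rewards + GAMMA * q_next * (1 - dones)
    loss = loss_fn(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def sync_target():
    # Copy policy-network weights into the target network, as described,
    # to reduce the likelihood of divergence.
    target_net.load_state_dict(policy_net.state_dict())
```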