Optimizing and Learning Diffusion Behaviors in Complex Network

Authors: Xiaojian Wu

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The experimental results show that by using the dual decomposition technique, we can get a near optimal solution several times faster than the original algorithm. In (Wu, Sheldon, and Zilberstein 2014), we developed a much faster approximate algorithm for solving the stochastic network design problem in which the underlying network structure is a directed tree. The algorithm is proved to be FPTAS. Applying it to the barrier removal problem, we show that in practice our algorithm is much faster than an existing technique (O'Hanley and Tomberlin 2005)."
Researcher Affiliation | Academia | Xiaojian Wu, School of Computer Science, University of Massachusetts Amherst, xiaojian@cs.umass.edu
Pseudocode | No | The paper describes algorithms verbally (e.g., "a sample average approximation based algorithm is given"), but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | No | "In our paper (Wu, Sheldon, and Zilberstein 2013a), we propose an algorithm to estimate the spread process of Red-cockaded Woodpecker using logistic regression model. In the data, where and when the birds are observed or unobserved are recorded. But for a lot of locations and time slots, no records are available." This text describes the data used but does not provide concrete access information or state its public availability.
Dataset Splits | No | The paper discusses experiments and data usage but does not provide specific details on how the data was split into training, validation, or testing sets (e.g., percentages, sample counts, or predefined splits).
Hardware Specification | No | The paper describes experiments and algorithms but does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not specify any software dependencies (e.g., programming languages, libraries, or solvers) with version numbers that would be needed for replication.
Experiment Setup | No | The paper discusses the algorithms and their observed performance, but it does not provide specific experimental setup details such as hyperparameter values, model initialization, or training configurations.