Structured Prediction of Network Response

Authors: Hongyu Su, Aristides Gionis, Juho Rousu

ICML 2014

Reproducibility Assessment (Variable / Result / LLM Response)
Research Type: Experimental. "In our experiments, we demonstrate that taking advantage of the context given by the actions and the network structure leads SPIN to a markedly better predictive performance over competing methods. In this section, we evaluate the performance of SPIN and compare it with the state-of-the-art methods through extensive experiments. We use two real-world datasets, DBLP and Memetracker, described below. Statistics of the datasets are given in Table 1."
Researcher Affiliation: Academia. Hongyu Su (hongyu.su@aalto.fi), Aristides Gionis (aristides.gionis@aalto.fi), and Juho Rousu (juho.rousu@aalto.fi), Helsinki Institute for Information Technology (HIIT), Department of Information and Computer Science, Aalto University, Finland.
Pseudocode: No. The paper describes the steps of the GREEDY algorithm in paragraph text (e.g., "The algorithm starts with an activated vertex set..."), but it does not provide a formal pseudocode block or a clearly labeled algorithm.
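The greedy construction alluded to in that quoted fragment can be sketched generically. This is not the paper's algorithm; it is a minimal illustration of the pattern "start with an activated vertex set and grow it greedily," assuming a hypothetical marginal-gain function supplied by the caller:

```python
def greedy_activation(vertices, gain, budget=None):
    """Generic greedy growth of an activated vertex set.

    Repeatedly adds the vertex with the largest positive marginal
    gain (as scored by the caller-supplied `gain(activated, v)`)
    until no vertex improves the score or an optional budget is hit.
    """
    activated = set()
    while budget is None or len(activated) < budget:
        best, best_gain = None, 0.0
        for v in vertices:
            if v in activated:
                continue
            g = gain(activated, v)
            if g > best_gain:
                best, best_gain = v, g
        if best is None:  # no remaining vertex has positive gain
            break
        activated.add(best)
    return activated
```

For example, with a modular gain that just reads per-vertex weights, the loop picks out exactly the positive-weight vertices in decreasing order of weight.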
Open Source Code: No. The paper mentions that the implementation of the ICM-EM algorithm is publicly available (footnote 3), but it does not provide access to the source code for the proposed SPIN method.
Open Datasets: Yes. "We use two real-world datasets, DBLP and Memetracker, described below. Statistics of the datasets are given in Table 1." The DBLP dataset is a collection of bibliographic information on major computer science journals and proceedings (footnote 1: http://www.informatik.uni-trier.de/~ley/db/). The Memetracker dataset is a set of phrases propagated over prominent online news sites in March 2009 (footnote 2: http://memetracker.org).
Dataset Splits: Yes. "The experimental results are from a five-fold cross validation."
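For readers unfamiliar with the protocol, five-fold cross validation partitions the data into five disjoint folds and rotates each fold through the test role. A minimal sketch (generic index-splitting only, not the paper's exact partitioning):

```python
import random


def five_fold_splits(n_items, n_folds=5, seed=0):
    """Return (train_indices, test_indices) pairs for k-fold CV.

    Indices are shuffled once, sliced into n_folds disjoint test
    folds, and each fold is paired with the remaining indices as
    its training set.
    """
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::n_folds] for i in range(n_folds)]
    splits = []
    for k in range(n_folds):
        test = set(folds[k])
        train = [i for i in idx if i not in test]
        splits.append((train, sorted(test)))
    return splits
```

Each item appears in exactly one test fold, so the five reported scores average over predictions made on every data point exactly once.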
Hardware Specification: No. The paper does not provide any specific details regarding the hardware used for running the experiments (e.g., GPU/CPU models, memory, or cloud resources).
Software Dependencies: No. The paper mentions several algorithms and tools (e.g., the LDA algorithm, ICM-EM, CPLEX) but does not provide version numbers for any software dependencies.
Experiment Setup: No. The paper mentions some parameters, such as the regularization slack parameter C and the scaling factors λ and β for the loss functions, but it does not report concrete hyperparameter values or system-level training settings for the primary experiments in the main text (e.g., the specific C value, learning rates, batch sizes, number of epochs, or the default λ and β values used in the main comparisons).
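For context, a slack parameter C of this kind typically enters a max-margin structured learning objective of the standard margin-rescaled form below; this is the generic textbook formulation, not necessarily the paper's exact objective, and the feature map φ and loss ℓ are placeholders:

```latex
\min_{\mathbf{w},\,\boldsymbol{\xi}}\;
\frac{1}{2}\|\mathbf{w}\|^{2} + C \sum_{i=1}^{m} \xi_i
\quad\text{s.t.}\quad
\mathbf{w}^{\top}\bigl(\varphi(x_i, y_i) - \varphi(x_i, y)\bigr)
\;\ge\; \ell(y_i, y) - \xi_i,
\qquad \forall i,\; \forall y \ne y_i,\;\; \xi_i \ge 0
```

Larger C penalizes margin violations more heavily, so reporting its value (and how it was tuned) is essential for reproducing such experiments.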