Learning Graphs with a Few Hubs

Authors: Rashish Tandon, Pradeep Ravikumar

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we present experimental results demonstrating that our algorithm does indeed succeed in recovering graphs with a few hubs. 4.1. Synthetic Data We first performed structure learning experiments on simulated data using 3 types of graphs: (...) Figure 2 shows plots of the Average Hamming Error (i.e. the average number of mismatches from the true graph) with varying number of samples for our method and the baselines, averaged over 10 trials. (...) 4.2. Real World Data We ran our algorithm on a data set consisting of US senate voting records from the 109th congress (2004–2006) (Banerjee et al., 2008).
Researcher Affiliation | Academia | Rashish Tandon, Pradeep Ravikumar {RASHISH, PRADEEPR}@CS.UTEXAS.EDU, Department of Computer Science, The University of Texas at Austin, USA
Pseudocode | Yes | Algorithm 1: Estimating M̂_{r,b,λ}(D); Algorithm 2: Algorithm to compute neighborhood estimate N̂(r), for each node r ∈ V, and the overall edge estimate Ê
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We ran our algorithm on a data set consisting of US senate voting records from the 109th congress (2004–2006) (Banerjee et al., 2008).
Dataset Splits | No | The paper mentions generating 'n i.i.d. samples' and using a 'varying number of samples' but does not specify any explicit training, validation, or test dataset splits or percentages.
Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running the experiments, such as GPU/CPU models or memory.
Software Dependencies | No | The paper mentions comparing with the 'ℓ1-estimator (Ravikumar et al., 2010)' and 'reweighted ℓ1-estimator (Liu & Ihler, 2011)' but does not specify versions for any software components used in the experiments.
Experiment Setup | Yes | In all our experiments, for our algorithm (denoted as SL1 in our plots), the value of N, the number of times to subsample, was fixed to 60. We set lower and upper thresholds on the sufficiency measure as t_l = 0.1 and t_u = 0.2. The subsample size was set to b = min(20√n, n) and the set of regularization parameters was taken as Λ = {0.005, 0.01, 0.015, …, 1}. (...) Our algorithm was run with the parameters N = 60, t_l = 0.1, t_u = 0.2, b = 450 and Λ = {0.005, 0.01, 0.015, …, 1}.
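The Average Hamming Error quoted in the Research Type row (the number of edge mismatches from the true graph, averaged over trials) is simple to compute. A minimal sketch, assuming graphs are represented as symmetric 0/1 adjacency matrices; the function name and representation are illustrative, not taken from the paper:

```python
import numpy as np

def average_hamming_error(true_adj, estimated_adjs):
    """Average number of edge mismatches between the true graph and each
    estimated graph over a list of trials.

    Adjacency matrices are symmetric 0/1 numpy arrays; only the strict
    upper triangle is counted, so each edge mismatch is counted once.
    """
    errors = []
    for est in estimated_adjs:
        mismatch = np.triu(true_adj != est, k=1)  # differing edges, upper triangle
        errors.append(int(mismatch.sum()))
    return float(np.mean(errors))

# Tiny example: a 3-node path graph as the "true" graph.
true_adj = np.array([[0, 1, 0],
                     [1, 0, 1],
                     [0, 1, 0]])
est1 = true_adj.copy()        # perfect recovery: 0 mismatches
est2 = np.array([[0, 1, 1],   # one spurious edge (0, 2): 1 mismatch
                 [1, 0, 1],
                 [1, 1, 0]])
print(average_hamming_error(true_adj, [est1, est2]))  # 0.5
```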
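The Pseudocode row notes that Algorithm 2 combines per-node neighborhood estimates N̂(r) into an overall edge estimate Ê. In neighborhood-based structure learning this combination is typically an AND or OR rule over the two endpoints of each candidate edge; the sketch below shows only that combination step under this assumption, not the paper's full algorithm (which also specifies how each N̂(r) is estimated):

```python
from itertools import combinations

def edges_from_neighborhoods(neighborhoods, rule="AND"):
    """Combine per-node neighborhood estimates into an edge set.

    neighborhoods: dict mapping each node r to a set of estimated neighbors.
    rule: "AND" keeps edge (r, s) only if s is in N(r) AND r is in N(s);
          "OR" keeps it if either endpoint claims the other as a neighbor.
    """
    nodes = sorted(neighborhoods)
    edges = set()
    for r, s in combinations(nodes, 2):
        in_r = s in neighborhoods[r]
        in_s = r in neighborhoods[s]
        if (rule == "AND" and in_r and in_s) or (rule == "OR" and (in_r or in_s)):
            edges.add((r, s))
    return edges

# Example: node 2 claims 0 as a neighbor, but 0 does not reciprocate,
# so the AND rule drops edge (0, 2) while the OR rule keeps it.
nbrs = {0: {1}, 1: {0, 2}, 2: {0, 1}}
print(edges_from_neighborhoods(nbrs, rule="AND"))  # {(0, 1), (1, 2)}
print(edges_from_neighborhoods(nbrs, rule="OR"))   # {(0, 1), (0, 2), (1, 2)}
```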
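The Experiment Setup row fixes all of SL1's hyperparameters. A small helper collecting them in one place; this is illustrative, not the authors' code, and the b = min(20√n, n) form is a reconstruction of the quoted setup that should be checked against the paper's PDF:

```python
import numpy as np

def experiment_settings(n):
    """Hyperparameters quoted in the experiment setup, for sample size n."""
    return {
        "N": 60,                            # number of subsampling rounds
        "t_lower": 0.1,                     # lower sufficiency threshold
        "t_upper": 0.2,                     # upper sufficiency threshold
        "b": int(min(20 * np.sqrt(n), n)),  # subsample size (reconstructed form)
        "Lambda": np.arange(0.005, 1.0 + 1e-12, 0.005),  # {0.005, 0.01, ..., 1}
    }

s = experiment_settings(n=500)
print(s["b"])            # 447, since min(20*sqrt(500), 500) ≈ 447.2
print(len(s["Lambda"]))  # 200 regularization values
```

The tiny `1e-12` slack on the `arange` stop value is only there to make the grid reliably include the endpoint 1 despite floating-point rounding.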