Learning Scale-Free Networks by Dynamic Node Specific Degree Prior

Authors: Qingming Tang, Siqi Sun, Jinbo Xu

ICML 2015

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | Our experimental results on both synthetic and real data show that our prior not only yields a scale-free network, but also produces many more correctly predicted edges than the existing scale-free-inducing prior, hub-inducing prior and the l1 norm.
Researcher Affiliation | Academia | Qingming Tang (QMTANG@TTIC.EDU), Toyota Technological Institute at Chicago, 6045 S. Kenwood Ave., Chicago, Illinois 60637, USA
Pseudocode | Yes | Algorithm 1: update of node ranking; Algorithm 2: edge rank updating. (An illustrative sketch of the edge-ranking idea follows this table.)
Open Source Code | No | No statement provides concrete access to source code or a link to a repository.
Open Datasets | Yes | Here we use the DREAM5 Network Inference Challenge dataset 1, which is a simulated gene expression dataset with 806 samples. DREAM5 also provides a ground-truth network for this dataset. See (Marbach et al., 2012) for more details. ... To further test our method, we used DREAM5 datasets 3 and 4, respectively. ... See (Marbach et al., 2012) for a detailed description of the two data sets.
Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a splitting methodology) for training, validation, and testing. It states the total number of samples for the synthetic and real datasets, but not how they are partitioned.
Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or other machine specifications) used for the experiments are provided.
Software Dependencies | No | The paper mentions various methods (e.g., Glasso, RW, Hub) and algorithms (ADMM) but does not specify software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, or specific library versions).
Experiment Setup | Yes | Our method uses 2 hyper-parameters: γ and λ. Meanwhile, γ is the hyper-parameter for the power-law distribution and λ controls sparsity. ... Hence we use γ = 2.5 in the following experiments. ... In our test, we use λ3 = 0.01 to yield the best performance. Besides, we set λ = λ1 = λ2 to produce a graph with a desired level of sparsity. (A configuration sketch follows this table.)
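The paper's pseudocode (Algorithms 1 and 2) is only named, not reproduced, in the row above. As a rough, non-authoritative illustration of the edge-ranking idea behind "edge rank updating", the sketch below orders each node's incident edges by the magnitude of the current precision-matrix estimate, so that stronger edges can receive smaller degree-based penalty weights; the function name and toy matrix are this summary's assumptions, not the authors' code.

```python
import numpy as np

def rank_node_edges(precision, node):
    """Rank the edges incident to `node` by the magnitude of their
    current precision-matrix estimates, largest first.

    In a degree-based prior, a heavier edge gets a smaller rank and
    hence a smaller penalty weight. Purely illustrative.
    """
    p = precision.shape[0]
    others = [j for j in range(p) if j != node]
    strengths = np.abs(precision[node, others])
    order = np.argsort(-strengths)          # descending magnitude
    return [others[k] for k in order]

# Toy usage: rank node 0's edges in a 4-node precision estimate.
omega = np.array([[ 1.0, -0.2,  0.5,  0.0],
                  [-0.2,  1.0,  0.1,  0.0],
                  [ 0.5,  0.1,  1.0, -0.3],
                  [ 0.0,  0.0, -0.3,  1.0]])
print(rank_node_edges(omega, 0))  # -> [2, 1, 3]
```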
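The experiment-setup row quotes concrete hyper-parameter values (γ = 2.5, λ3 = 0.01, and λ = λ1 = λ2 tied to a target sparsity level). A minimal sketch collecting them into a configuration, assuming a hypothetical solver interface and purely illustrative sweep values for λ:

```python
# Values quoted from the paper's experiment setup; the dict layout,
# sweep values, and any solver wiring are assumptions of this
# summary, not the authors' released code.
base = {
    "gamma": 2.5,     # power-law exponent of the scale-free prior
    "lambda3": 0.01,  # value the paper reports as performing best
}

# The paper sets lambda = lambda1 = lambda2 to reach a desired
# sparsity level, so a single value is varied here.
for lam in (0.05, 0.1, 0.2):  # illustrative sweep values only
    config = {**base, "lambda1": lam, "lambda2": lam}
    print(config)  # ...pass `config` to the chosen solver instead
```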