Fast Algorithm for K-Truss Discovery on Public-Private Graphs

Authors: Soroush Ebadian, Xin Huang

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Extensive experiments validate the superiority of our proposed algorithms against state-of-the-art methods on real-world datasets.'
Researcher Affiliation | Academia | Soroush Ebadian (Sharif University of Technology; Hong Kong Baptist University) and Xin Huang (Hong Kong Baptist University). Emails: soroushebadian@gmail.com, xinhuang@comp.hkbu.edu.hk
Pseudocode | Yes | Algorithm 1: Node-Insertion Updating Algorithm; Algorithm 2: Node-Insertion Bound Computing Algorithm; Algorithm 3: Hybrid-PP Algorithm. (A static k-truss sketch is given after this table for reference.)
Open Source Code | No | The paper provides the link https://github.com/samjjx/pp-data in the 'Datasets' section, but it points to the datasets used, not to the source code of the methodology described in the paper.
Open Datasets | Yes | Datasets: 'We used four public-private graphs of PP-DBLP [Huang et al., 2018] in Table 1. Published articles form the public network, and ongoing collaborations form the private networks, which are known only to some of the authors. We also used eight real-world graphs available from SNAP [Leskovec and Krevl, 2014], shown in Table 2.'
Dataset Splits | No | The paper describes how nodes were sampled to train the classifier ('We first divided all nodes into 100 bins... and then randomly took four nodes from each bin'; see the sampling sketch after this table), but it does not specify explicit train/validation/test splits for the main k-truss discovery problem.
Hardware Specification | No | The paper reports experiments on the SNAP and PP-DBLP datasets but gives no details about the hardware used (e.g., CPU or GPU models, memory).
Software Dependencies | No | The paper states that Hybrid-PP adopted a Random Forest classifier but does not give version numbers for any software libraries, programming languages, or other dependencies needed to reproduce the experiments.
Experiment Setup | Yes | The parameter k is set to 7 by default, and the methods are also evaluated with k varied over {5, 7, 9, 11, 13, 15}. Hybrid-PP adopted a Random Forest with 51 estimators and a maximum depth of 11 to construct a classifier (see the configuration sketch after this table).
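The Pseudocode row above names the paper's incremental Node-Insertion algorithms, which are not reproduced here. For reference only, the following is a minimal sketch of plain k-truss extraction on a static graph by iterative edge peeling, using the standard definition the paper builds on (every edge of the k-truss lies in at least k - 2 triangles of the k-truss). The adjacency-dictionary representation and function name are illustrative choices, not taken from the paper.

    from collections import deque

    def k_truss(adj, k):
        """Return the edges of the k-truss of an undirected simple graph.

        adj: dict mapping each node to the set of its neighbours.
        An edge survives iff it lies in at least k - 2 triangles of the
        remaining subgraph (standard k-truss definition).
        """
        adj = {u: set(vs) for u, vs in adj.items()}
        # support[(u, v)] = number of triangles currently containing edge (u, v)
        support = {(u, v): len(adj[u] & adj[v])
                   for u in adj for v in adj[u] if u < v}
        # start peeling from every edge whose support is already too small
        queue = deque(e for e, s in support.items() if s < k - 2)
        removed = set()
        while queue:
            u, v = queue.popleft()
            if (u, v) in removed:
                continue
            removed.add((u, v))
            # every common neighbour w loses the triangle {u, v, w}
            for w in adj[u] & adj[v]:
                for e in ((min(u, w), max(u, w)), (min(v, w), max(v, w))):
                    if e not in removed:
                        support[e] -= 1
                        if support[e] < k - 2:
                            queue.append(e)
            adj[u].discard(v)
            adj[v].discard(u)
        return {e for e in support if e not in removed}

    # Example: a 5-clique is exactly a 5-truss (each edge lies in 3 triangles),
    # so k = 5 keeps all 10 edges while k = 6 peels everything away.
    clique5 = {u: {v for v in range(5) if v != u} for u in range(5)}
    print(len(k_truss(clique5, 5)), len(k_truss(clique5, 6)))  # 10 0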
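The Dataset Splits row quotes the node-sampling step used to build training data for the classifier. A minimal sketch of that step is below; the criterion used to form the 100 bins is not stated in the excerpt, so the optional sorting key (and the seed) is a placeholder assumption.

    import random

    def sample_training_nodes(nodes, key=None, n_bins=100, per_bin=4, seed=0):
        """Partition nodes into n_bins consecutive bins and draw per_bin
        nodes uniformly at random from each bin.

        key: placeholder for whatever node score the paper bins on;
        the excerpt does not specify it, so this is an assumption.
        """
        rng = random.Random(seed)
        ordered = sorted(nodes, key=key) if key is not None else list(nodes)
        # equal-sized consecutive bins; any leftover nodes are ignored for brevity
        bin_size = max(1, len(ordered) // n_bins)
        sampled = []
        for i in range(n_bins):
            bin_nodes = ordered[i * bin_size:(i + 1) * bin_size]
            if bin_nodes:
                sampled.extend(rng.sample(bin_nodes, min(per_bin, len(bin_nodes))))
        return sampled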
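The Experiment Setup row pins down the classifier hyperparameters used by Hybrid-PP. A sketch of that configuration with scikit-learn follows; the library choice, random seed, and the placeholder feature/label arrays are assumptions, since the paper only names a Random Forest with 51 estimators and a maximum depth of 11.

    from sklearn.ensemble import RandomForestClassifier

    # Hyperparameters taken from the Experiment Setup row; everything else
    # (library, seed, feature construction) is an assumption.
    clf = RandomForestClassifier(n_estimators=51, max_depth=11, random_state=0)

    # Hypothetical usage: X_train / y_train would hold per-node features and
    # labels derived from the public-private graph; they are placeholders here.
    # clf.fit(X_train, y_train)
    # scores = clf.predict_proba(X_test)[:, 1]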