OMNI-Prop: Seamless Node Classification on Arbitrary Label Correlation

Authors: Yuto Yamaguchi, Christos Faloutsos, Hiroyuki Kitagawa

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on four real, different network datasets demonstrate the benefits of the proposed algorithm, where OMNI-Prop outperforms the top competitors."
Researcher Affiliation | Academia | University of Tsukuba; Carnegie Mellon University (yuto_ymgc@kde.cs.tsukuba.ac.jp, christos@cs.cmu.edu, kitagawa@cs.tsukuba.ac.jp)
Pseudocode | Yes | Algorithm 1: Iterative Algorithm
Open Source Code | Yes | "Our code is also made available on the web."
Open Datasets | Yes | "Datasets. Five network datasets used in our experiments are described in Table 3. POLBLOGS (Adamic and Glance 2005) ... COAUTHOR (Sun et al. 2009) ... FACEBOOK (Leskovec and Mcauley 2012) ... POKEC-G (Takac and Zabovsky 2012) ... POKEC-L (Takac and Zabovsky 2012) ... The datasets we use in this experiments are all available on the web."
Dataset Splits | No | "In our experiments, we hide labels of 70% of labeled nodes on each network. Then we perform node classification algorithm to infer hidden labels."
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers for the experiments.
Experiment Setup | Yes | "although one can use arbitrary values (e.g., the class mass ratio) for priors, we adopt the uniform prior (i.e., b_k = 1/K) in this paper ... As discussed in our experiments later, we can always use λ = 1.0, meaning that no parameter tuning is needed ... According to the results, we can say that OMNI-Prop almost converges after 10 iterations on all networks ... We determine the value of α as 0.001 for POLBLOGS, COAUTHOR, FACEBOOK, and POKEC-L, while 0.001 for POKEC-G."
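The reported setup (uniform prior b_k = 1/K, smoothing weight λ = 1.0, near-convergence after 10 iterations, clamped observed labels) can be illustrated with a minimal prior-smoothed label-propagation sketch. This is an assumption-laden approximation, not the paper's Algorithm 1: OMNI-Prop maintains two sets of messages to handle arbitrary label correlation, which this sketch collapses into a single smoothed neighbor average; the function name and all variable names are hypothetical.

```python
import numpy as np

def omni_prop_sketch(adj, labels, K, lam=1.0, iters=10):
    """Illustrative prior-smoothed propagation (NOT the paper's exact update).

    adj:    list of neighbor lists (undirected graph)
    labels: dict {node: class index} for observed nodes
    K:      number of classes
    lam:    smoothing weight toward the uniform prior (paper reports λ = 1.0)
    iters:  paper reports near-convergence after 10 iterations
    """
    n = len(adj)
    b = np.full(K, 1.0 / K)              # uniform prior b_k = 1/K
    s = np.tile(b, (n, 1))               # per-node class-score vectors
    for v, k in labels.items():          # clamp observed labels
        s[v] = np.eye(K)[k]
    for _ in range(iters):
        r = np.empty_like(s)
        for v in range(n):
            nb = adj[v]
            tot = s[nb].sum(axis=0) if nb else np.zeros(K)
            # neighbor average, smoothed toward the prior by λ
            r[v] = (tot + lam * b) / (len(nb) + lam)
        s = r
        for v, k in labels.items():      # re-clamp labeled nodes each sweep
            s[v] = np.eye(K)[k]
    return s
```

On a 4-node path graph with the endpoints labeled with different classes, each interior node's scores lean toward its nearer labeled endpoint, and each score vector stays a proper distribution because the λ-smoothed average preserves row sums of 1.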