Community Preserving Network Embedding

Authors: Xiao Wang, Peng Cui, Jing Wang, Jian Pei, Wenwu Zhu, Shiqiang Yang

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on a variety of real-world networks show the superior performance of the proposed method over the state-of-the-arts.
Researcher Affiliation | Academia | Xiao Wang¹, Peng Cui¹, Jing Wang², Jian Pei³, Wenwu Zhu¹, Shiqiang Yang¹ (¹Department of Computer Science and Technology, Tsinghua University, China; ²Faculty of Science and Technology, Bournemouth University, UK; ³School of Computing Science, Simon Fraser University, Canada)
Pseudocode | No | The paper provides mathematical updating rules but does not include a clearly labeled "Pseudocode" or "Algorithm" block.
Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | Yes | The Web KB network consists of 4 subnetworks with 877 webpages and 1608 edges. The subnetworks were gathered from 4 universities, i.e., Cornell, Texas, Washington and Wisconsin. Each subnetwork is divided into 5 communities. Political blog network (Polblogs) (Adamic and Glance 2005) (1222 nodes, 16715 edges)... Facebook networks (Traud, Mucha, and Porter 2012)...
Dataset Splits | No | For each class of a given network, we randomly selected 80% nodes as the training nodes and the rest as the testing nodes. This describes training and testing splits, but no explicit validation split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | We used the LIBLINEAR package (Fan et al. 2008) to train the classifiers. While a package is named, a specific version number is not provided.
Experiment Setup | Yes | We uniformly set the representation dimension m = 100. For M-NMF, we set α and β from {0.1, 0.5, 1, 5, 10}. We repeated the clustering 20 times, each with a new set of initial centroids, and reported the average results, shown in Table 1. For each class of a given network, we randomly selected 80% of the nodes as training nodes and the rest as testing nodes. We repeated the process 5 times and reported the average accuracy, shown in Table 2.
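
The clustering and classification protocols quoted in the Experiment Setup row are simple to re-implement. Below is a minimal Python sketch of that evaluation loop, assuming an "embeddings" matrix (one 100-dimensional row per node, e.g., produced by M-NMF) and a ground-truth "labels" vector; these names, and the use of scikit-learn's LIBLINEAR-backed LogisticRegression in place of the original LIBLINEAR package, are illustrative assumptions rather than the authors' code.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression  # liblinear solver stands in for LIBLINEAR

    def clustering_eval(embeddings, n_communities, runs=20, seed=0):
        # Quoted protocol: repeat the clustering 20 times, each run with a
        # fresh set of initial centroids, then average the results (Table 1).
        rng = np.random.RandomState(seed)
        cluster_labels = []
        for _ in range(runs):
            km = KMeans(n_clusters=n_communities, n_init=1,
                        random_state=rng.randint(2**31 - 1))
            cluster_labels.append(km.fit_predict(embeddings))
        return cluster_labels  # score each run against ground truth, then average

    def classification_eval(embeddings, labels, train_frac=0.8, repeats=5, seed=0):
        # Quoted protocol: per class, 80% of nodes for training and the rest
        # for testing, average accuracy over 5 random repetitions (Table 2).
        rng = np.random.RandomState(seed)
        accs = []
        for _ in range(repeats):
            train_idx = []
            for c in np.unique(labels):
                idx = np.where(labels == c)[0]
                rng.shuffle(idx)
                train_idx.extend(idx[:int(train_frac * len(idx))])
            train_idx = np.array(train_idx)
            test_idx = np.setdiff1d(np.arange(len(labels)), train_idx)
            clf = LogisticRegression(solver="liblinear")
            clf.fit(embeddings[train_idx], labels[train_idx])
            accs.append(clf.score(embeddings[test_idx], labels[test_idx]))
        return float(np.mean(accs))

Setting n_init=1 makes each of the 20 k-means runs use a single fresh centroid initialization, matching the quoted setup; the scoring metric for the clustering runs is left open here because this report does not quote which implementation the paper used.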