Discrete Network Embedding

Authors: Xiaobo Shen, Shirui Pan, Weiwei Liu, Yew-Soon Ong, Quan-Sen Sun

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on node classification consistently demonstrate that DNE exhibits lower storage and computational complexity than state-of-the-art network embedding methods, while obtaining competitive classification results.
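The claimed storage advantage is straightforward to quantify: a d-bit binary code takes 1/32 the memory of a float32 embedding of the same dimension. A minimal sketch (the node count is illustrative, not from the paper; only the 128-dim setting matches the experiments):

```python
import numpy as np

# Illustrative size (not from the paper): 1M nodes, 128-dim embeddings.
n_nodes, dim = 1_000_000, 128

# Continuous embedding: one float32 (4 bytes) per dimension.
dense_bytes = n_nodes * dim * np.dtype(np.float32).itemsize

# Discrete (binary) embedding: one bit per dimension, packed 8 per byte.
binary_bytes = n_nodes * dim // 8

ratio = dense_bytes // binary_bytes  # 32x storage reduction
```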
Researcher Affiliation | Academia | School of Computer Science and Engineering, Nanyang Technological University; Centre for Artificial Intelligence, FEIT, University of Technology Sydney; School of Computer Science and Engineering, The University of New South Wales; School of Computer Science and Engineering, Nanjing University of Science and Technology.
Pseudocode | Yes | Algorithm 1: Discrete Network Embedding.
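For context, the core idea of discrete embedding is to learn node codes constrained to {-1, +1} directly, rather than binarizing a continuous embedding afterwards. The toy sketch below is not the authors' Algorithm 1; the update rule and all names are my simplifications, meant only to illustrate learning ±1 codes that preserve a given similarity matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def discrete_embed(S, dim=4, iters=20):
    """Learn binary codes B in {-1,+1}^(n x dim) that roughly preserve
    a symmetric similarity matrix S, via naive alternating sign updates.
    Simplified illustration only, not the paper's Algorithm 1."""
    n = S.shape[0]
    B = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
    for _ in range(iters):
        # Refresh every code as the sign of its similarity-weighted
        # neighbourhood aggregate; map sign(0) to +1 to stay binary.
        B_new = np.sign(S @ B)
        B_new[B_new == 0] = 1.0
        if np.array_equal(B_new, B):  # reached a fixed point
            break
        B = B_new
    return B

# Toy symmetric similarity with two clusters of nodes: {0,1} and {2,3}.
S = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
B = discrete_embed(S)
```

Nodes in the same cluster converge to identical binary codes, which is the discrete analogue of similar nodes getting nearby embeddings.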
Open Source Code | No | The paper does not provide any explicit statement or link regarding the public availability of its source code.
Open Datasets | Yes | DBLP (http://arnetminer.org/citation) is a citation network in computer science; YOUTUBE and FLICKR (both from http://socialnetworks.mpi-sws.org/data-imc2007.html) are social networks of YouTube and Flickr users.
Dataset Splits | No | The paper states: 'We randomly sample a portion of the labeled nodes for training classifier and the rest nodes are used for testing. The training ratio increases from 10% to 90% for the three datasets.' Only training and test splits are specified; no separate validation split or its proportion is mentioned.
Hardware Specification | Yes | All the computations reported in this study are performed on a Ubuntu 64-bit Linux workstation with a 24-core Intel Xeon E5-2620 CPU at 2.10 GHz and 128 GB memory.
Software Dependencies | No | The paper notes that the 'multi-class SVM by Crammer and Singer [Crammer and Singer, 2001] is employed as the classifier, which is implemented by LIBLINEAR package [Fan et al., 2008]', but gives no version numbers for LIBLINEAR or any other software dependency.
Experiment Setup | Yes | In this work, we set the dimension of network embedding as 128 for all the methods for fair comparison. For DeepWalk, window size, walk length, and walks per node are set as 10, 40, and 40, respectively. For LINE, the number of negative samples is set to 5. For Node2Vec, window size, walk length, and walks per node are set the same as DeepWalk, and the return parameter p and in-out parameter q are set as 1 and 2, respectively. For the proposed DNE, µ and ρ are set as 0.01, 0.01, and 0.5, respectively; τ and λ are selected from the ranges [1, 10] and [0, 1] by cross-validation. For node classification, node representations are first obtained from the network embedding methods and then used as features to train a classifier. We randomly sample a portion of the labeled nodes for training the classifier, and the remaining nodes are used for testing. The training ratio increases from 10% to 90% for the three datasets.
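The evaluation protocol described above (embed nodes, sweep the training ratio from 10% to 90%, classify with a linear SVM) can be sketched as follows. The features and labels here are synthetic stand-ins for the learned embeddings and dataset labels; the paper uses the Crammer-Singer multi-class SVM via LIBLINEAR, for which scikit-learn's liblinear-backed `LinearSVC` serves as an approximate substitute:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in features and labels: in the paper these would be the learned
# 128-dim node embeddings and the datasets' node labels.
X = np.vstack([rng.normal(+2, 0.3, (100, 16)),
               rng.normal(-2, 0.3, (100, 16))])
y = np.array([0] * 100 + [1] * 100)

# Sweep the training ratio as in the paper (10% .. 90%); the remaining
# nodes are used for testing, with no validation split.
scores = {}
for ratio in (0.1, 0.5, 0.9):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=ratio, stratify=y, random_state=0)
    # Liblinear-backed linear SVM standing in for the paper's
    # Crammer-Singer multi-class SVM (LIBLINEAR).
    scores[ratio] = LinearSVC().fit(X_tr, y_tr).score(X_te, y_te)
```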