Semi-supervisedly Co-embedding Attributed Networks

Authors: Zaiqiao Meng, Shangsong Liang, Jinyuan Fang, Teng Xiao

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on real-world networks demonstrate that our model yields excellent performance in a number of applications such as attribute inference, user profiling and node classification compared to the state-of-the-art baselines. We perform extensive experiments on real-world attributed networks to verify the effectiveness of our embedding model in terms of three network mining tasks, and the results demonstrate that our model significantly outperforms state-of-the-art methods.
Researcher Affiliation | Academia | Zaiqiao Meng, Department of Computing Science, Sun Yat-sen University and University of Glasgow; Shangsong Liang, School of Data and Computer Science, Sun Yat-sen University; Jinyuan Fang, School of Data and Computer Science, Sun Yat-sen University; Teng Xiao, School of Data and Computer Science, Sun Yat-sen University.
Pseudocode | No | The paper describes the model (SCAN) and its optimization, but does not include any explicitly labelled pseudocode or algorithm blocks.
Open Source Code | Yes | The code of our SCAN is publicly available from: https://github.com/mengzaiqiao/SCAN.
Open Datasets | Yes | All experiments of this paper are conducted based on three real-world attributed networks, i.e., Pubmed [12], BlogCatalog [36] and Flickr [36].
Dataset Splits | Yes | We randomly divide all edges into three sets, i.e., the training (85%), validation (5%) and testing (10%) sets.
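The 85/5/10 edge split quoted above can be sketched as follows. The function name `split_edges` and the fixed seed are illustrative assumptions, not details from the paper; integer arithmetic is used so the set sizes are exact.

```python
import random

def split_edges(edges, seed=0):
    """Randomly divide a list of edges into training (85%),
    validation (5%) and testing (10%) sets, as described above."""
    rng = random.Random(seed)  # fixed seed only for reproducibility of this sketch
    shuffled = edges[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = n * 85 // 100
    n_val = n * 5 // 100
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```

The paper repeats this random division (together with re-sampling the labelled nodes) 10 times, so in practice the split would be re-drawn with a different seed per run.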
Hardware Specification | No | The paper does not provide specific hardware details (such as exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software components like "deep neural networks", "SVM classifier", and "t-SNE tool", but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | We introduce an adjustable hyper-parameter β that balances reconstruction accuracy between edges and attributes. In addition, similar to [19], we wish the parameters of our predictive distribution, i.e. qφc(Yv | φ(Fnv)), to also be trained on the labelled nodes based on their features Fnv; therefore, we add a classification loss to Eq. 8 and introduce a hyper-parameter to govern the relative weight between the generative and purely discriminative models, which results in the following loss... The softmax temperature is set to 0.2 in our experiments. We randomly select 10% of nodes as the labelled nodes... We repeat this process 10 times, and we randomly divide all edges into three sets, i.e., the training (85%), validation (5%) and testing (10%) sets.
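The objective described in this row, a generative (ELBO-style) term whose edge and attribute reconstruction losses are balanced by β, plus a classification loss on labelled nodes weighted by a second hyper-parameter, can be sketched as below. This is not the authors' exact Eq. 8: the function names, the symbol `alpha` for the discriminative weight, and the NumPy formulation are illustrative assumptions.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Numerically stable softmax cross-entropy, averaged over samples."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def combined_loss(edge_recon, attr_recon, kl, logits, labels, labelled_mask,
                  beta=1.0, alpha=1.0):
    """Hedged sketch of a semi-supervised objective in the spirit of the row
    above: beta balances edge vs. attribute reconstruction inside the
    generative term, and alpha governs the relative weight of the purely
    discriminative (classification) term, which is computed only on the
    labelled nodes."""
    generative = edge_recon + beta * attr_recon + kl
    discriminative = softmax_cross_entropy(logits[labelled_mask],
                                           labels[labelled_mask])
    return generative + alpha * discriminative
```

With alpha set to 0 the objective reduces to the purely generative model, which is one way to see why the paper introduces the weight: it interpolates between unsupervised co-embedding and supervised classification.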