SNEQ: Semi-Supervised Attributed Network Embedding with Attention-Based Quantisation

Authors: Tao He, Lianli Gao, Jingkuan Song, Xin Wang, Kejie Huang, Yuanfang Li (pp. 4091-4098)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our evaluation on four real-world networks of diverse characteristics shows that SNEQ outperforms a number of state-of-the-art embedding methods in link prediction, node classification and node recommendation.
Researcher Affiliation | Academia | 1) Faculty of Information Technology, Monash University, Clayton, Victoria 3800; 2) Center for Future Media, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731; 3) College of Intelligence and Computing, Tianjin University, Jinnan, Tianjin 300350; 4) College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, Zhejiang 310007
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is released at https://github.com/htlsn/SNEQ.
Open Datasets | Yes | We evaluate our method on four real-world networks. Brief statistics of the datasets are shown in Table 1. It is worth noting that DBLP (Yang and Leskovec 2015) is a large-scale unattributed network.
Dataset Splits | Yes | Specifically, we randomly select 5% and 10% of edges as the validation and test sets respectively, similar to Graph2Gauss (Bojchevski and Günnemann 2018).
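The quoted edge split (5% validation, 10% test, remainder for training) can be sketched as a simple random partition of the edge list. The function name, seeding, and return order below are illustrative assumptions, not taken from the paper:

```python
import random

def split_edges(edges, val_frac=0.05, test_frac=0.10, seed=0):
    """Randomly partition an edge list into train/val/test sets.

    Sketch of the split described in the quote above: 5% of edges
    for validation, 10% for testing, the rest for training.
    """
    rng = random.Random(seed)       # fixed seed for reproducibility
    shuffled = edges[:]             # avoid mutating the caller's list
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_frac)
    n_test = int(len(shuffled) * test_frac)
    val = shuffled[:n_val]
    test = shuffled[n_val:n_val + n_test]
    train = shuffled[n_val + n_test:]
    return train, val, test
```

For a graph with 100 edges this yields 5 validation edges, 10 test edges, and 85 training edges; note that for link prediction one would typically also sample non-edges as negatives, which this sketch omits.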
Hardware Specification | Yes | All experiments were performed on a workstation with 256 GB memory, 32 Intel(R) Xeon(R) CPUs (E5-2620 v4 @ 2.10GHz) and 8 GeForce GTX 1080Ti GPUs.
Software Dependencies | No | The paper mentions software components implicitly (e.g., "Our encoder consists of two dense layers"), but it does not specify version numbers for any libraries, frameworks, or programming languages (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | Our encoder consists of two dense layers of 512 units and 128 units respectively. The dimension of the network embeddings is set to 128. The learning rate η is set to 0.001, and the batch size is 100. Hyperparameters α and β are not fixed but tuned in an unsupervised way similar to (Xie et al. 2018). Specifically, α = 0.1 / (1 + e^(ωμ)), where ω is a constant value 0.5 and μ is the training progress from 0 to 1, while β is set as 1.0 - 1 / (1 + e^(ωμ)). For quantisation, we set M = 16 and K = 8 by default, but also test the impact of different values of M and K, which can be seen in the experiments below. The amount of labelled nodes T used in semi-supervised training (as defined in Eq. 3) is set to 10% of |V| by default.
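The α/β schedule quoted above can be sketched as follows. The exact sign of the exponent is ambiguous in the extracted text, so this is a best-guess reading under which α decays from 0.05 while β grows from 0.5 towards 1.0 as training progresses:

```python
import math

def loss_weights(progress, omega=0.5):
    """Hypothetical sketch of the unsupervised weight schedule.

    progress: training progress mu in [0, 1].
    omega:    the constant omega = 0.5 from the quoted setup.
    Returns (alpha, beta). As progress increases, the gate s shrinks,
    so alpha = 0.1 * s decays while beta = 1.0 - s ramps up.
    """
    s = 1.0 / (1.0 + math.exp(omega * progress))  # sigmoid-style gate
    alpha = 0.1 * s
    beta = 1.0 - s
    return alpha, beta
```

At the start of training (μ = 0) this gives α = 0.05 and β = 0.5; by the end (μ = 1) α has decayed and β has grown, shifting weight towards the quantisation term as the embeddings stabilise.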