GraLSP: Graph Neural Networks with Local Structural Patterns

Authors: Yilun Jin, Guojie Song, Chuan Shi (pp. 4361-4368)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section we introduce our experimental evaluations on our model GraLSP."
Researcher Affiliation | Academia | Yilun Jin: The Hong Kong University of Science and Technology, Hong Kong SAR, China; Guojie Song: Key Laboratory of Machine Perception, Ministry of Education, Peking University, China; Chuan Shi: Beijing University of Posts and Telecommunications, China
Pseudocode | No | The paper describes its methods through equations and textual explanations, but it does not include a structured pseudocode block or a clearly labeled algorithm.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository.
Open Datasets | Yes | "We use the following datasets for the experiments... Cora and Citeseer are citation datasets used in GCN (Kipf and Welling 2016)... AMiner is used in DANE (Zhang et al. 2019)... US-Airport is the dataset used in struc2vec (Ribeiro et al. 2017)..."
Dataset Splits | Yes | "We take 20% of all nodes as the test set and 80% as training."
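The reported 80/20 node split can be sketched as follows. The paper states only the proportions, so the random shuffling, the fixed seed, and the use of Cora's 2,708 nodes are assumptions for illustration.

```python
import numpy as np

def split_nodes(num_nodes, test_ratio=0.2, seed=0):
    """Randomly split node indices into train/test sets.

    Only the 80/20 proportions come from the paper; the shuffling
    strategy and seed are assumptions.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_nodes)
    n_test = int(num_nodes * test_ratio)
    # First 20% of the shuffled indices become the test set.
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return train_idx, test_idx

# Cora has 2,708 nodes, so a 20% test set holds 541 of them.
train_idx, test_idx = split_nodes(2708)
```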
Hardware Specification | No | The paper describes experimental settings and parameters but does not provide specific hardware details such as GPU/CPU models or memory specifications used for running experiments.
Software Dependencies | No | The paper states "We adopt the Adam Optimizer to optimize the objective using TensorFlow" but gives no version numbers for its dependencies.
Experiment Setup | Yes | "As for parameter settings, we take 32-dimensional embeddings for all methods, and adopt Adam optimizer with learning rate 0.005. For GNNs, we take 2-layer networks with a hidden layer sized 100. For models involving skip-gram optimization including DeepWalk, GraphSAGE and GraLSP, we take γ = 100, l = 8, window size as 5 and the number of negative sampling as 8. For models involving neighborhood sampling, we take the number for sampling as 20. In addition, we take μ = 0.1, and d = 30 for GraLSP, and keep the other parameters for the baselines as default."
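The hyperparameters quoted above can be collected into a single configuration sketch for re-running the experiments. The values are the paper's; the key names and the dict format are illustrative assumptions, and the meaning of d (a GraLSP-specific parameter) is taken at face value from the quote.

```python
# Hyperparameters reported in the GraLSP experiment setup.
# Values come from the paper; key names are assumptions.
GRALSP_CONFIG = {
    "embedding_dim": 32,           # used for all compared methods
    "optimizer": "adam",
    "learning_rate": 0.005,
    "gnn_layers": 2,
    "hidden_dim": 100,
    # Skip-gram objective, shared with DeepWalk / GraphSAGE:
    "walks_per_node": 100,         # gamma
    "walk_length": 8,              # l
    "window_size": 5,
    "negative_samples": 8,
    "neighborhood_sample_size": 20,
    # GraLSP-specific parameters:
    "mu": 0.1,
    "d": 30,
}

def validate_config(cfg):
    """Basic sanity checks before launching a run."""
    assert cfg["learning_rate"] > 0
    assert cfg["window_size"] <= cfg["walk_length"]
    return cfg
```

A usage pattern would be passing `validate_config(GRALSP_CONFIG)` to the training entry point, keeping all baseline-specific parameters at their defaults as the paper does.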