Neural Gaussian Similarity Modeling for Differential Graph Structure Learning

Authors: Xiaolong Fan, Maoguo Gong, Yue Wu, Zedong Tang, Jieyi Liu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrate the effectiveness of the proposed methods." and "Extensive experiments on graph and graph-enhanced application datasets demonstrate the superior effectiveness of the proposed method."
Researcher Affiliation | Academia | 1 School of Electronic Engineering, Key Laboratory of Collaborative Intelligence Systems of Ministry of Education, Xidian University, Xi'an, China; 2 School of Computer Science and Technology, Key Laboratory of Collaborative Intelligence Systems of Ministry of Education, Xidian University, Xi'an, China; 3 Academy of Advanced Interdisciplinary Research, Key Laboratory of Collaborative Intelligence Systems of Ministry of Education, Xidian University, Xi'an, China
Pseudocode | No | The paper describes methods using mathematical equations and textual explanations, but it does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | "To evaluate the developed methods, we use eight commonly used graph datasets, including two citation network datasets, i.e., CiteSeer and PubMed (Yang, Cohen, and Salakhudinov 2016), two Wikipedia network datasets, i.e., Chameleon and Squirrel (Pei et al. 2020), two coauthor network datasets, i.e., CS and Physics (Shchur et al. 2018), and two graph-enhanced application datasets, i.e., 20News and Mini-ImageNet (Mini) (Wu et al. 2022)."
Dataset Splits | Yes | "For all datasets, we randomly sampled 50% of nodes for the training set, 25% for the validation set, and 25% for the test set."
Hardware Specification | No | The paper mentions "GPU Memory (MB)" in Table 4 but does not provide specific details such as GPU models, CPU models, or any other hardware specifications used for running the experiments.
Software Dependencies | No | The paper mentions using the "Adam optimizer" and "GNNs" but does not specify any software dependencies with version numbers (e.g., programming language versions or library versions).
Experiment Setup | Yes | "The Adam optimizer is used with the learning rate 0.01 for the CiteSeer dataset and 0.001 for other datasets. We set the hidden dimension to be 256 for the Mini-ImageNet dataset, 32 for other datasets, and the number of transition-graph nodes to 500. The weight parameters are initialized using Glorot initialization and the bias parameters using zero initialization. For the parameter c of NeuralGauSim, we multiply c by a scale factor of 0.1. We add dropout layers with probabilities of 0.5 after the first layer of the GNNs, and train two-layer GNNs."
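The 50/25/25 random node split described in the Dataset Splits row can be sketched as follows. This is a minimal illustration of the stated protocol, not the authors' code; the function name and seed handling are assumptions.

```python
import random

def split_nodes(num_nodes, seed=0):
    """Randomly split node indices into 50% train, 25% validation,
    and 25% test, as the paper's split protocol describes."""
    rng = random.Random(seed)
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = num_nodes // 2          # 50% of nodes
    n_val = num_nodes // 4            # 25% of nodes
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]      # remaining ~25%
    return train, val, test
```

A fixed seed makes the split reproducible across runs; the paper does not state which seed (if any) was used.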
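The hyperparameters reported in the Experiment Setup row can be collected into a single configuration sketch. The helper name, dict layout, and dataset keys below are illustrative assumptions, not the authors' implementation; only the values come from the paper.

```python
def get_config(dataset):
    """Return the training hyperparameters reported in the paper
    for a given dataset name (names here are assumed labels)."""
    return {
        "optimizer": "Adam",
        # learning rate 0.01 for CiteSeer, 0.001 for all other datasets
        "lr": 0.01 if dataset == "CiteSeer" else 0.001,
        # hidden dimension 256 for Mini-ImageNet, 32 otherwise
        "hidden_dim": 256 if dataset == "Mini" else 32,
        "num_transition_nodes": 500,   # transition-graph node count
        "weight_init": "glorot",       # Glorot weight initialization
        "bias_init": "zeros",          # zero bias initialization
        "c_scale": 0.1,                # scale factor on NeuralGauSim's c
        "dropout": 0.5,                # after the first GNN layer
        "num_layers": 2,               # two-layer GNNs
    }
```

Centralizing the per-dataset exceptions (learning rate, hidden dimension) in one place makes the reported setup easy to audit against the paper's text.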