Efficient Graph Similarity Computation with Alignment Regularization

Authors: Wei Zhuo, Guang Tan

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on real-world datasets demonstrate the effectiveness, efficiency and transferability of our approach." |
| Researcher Affiliation | Academia | Wei Zhuo, Shenzhen Campus of Sun Yat-sen University (zhuow5@mail2.sysu.edu.cn); Guang Tan, Shenzhen Campus of Sun Yat-sen University (tanguang@mail.sysu.edu.cn) |
| Pseudocode | No | The paper describes its methods in text and equations but does not contain a structured pseudocode or algorithm block. |
| Open Source Code | Yes | "3.a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Supplemental Material" |
| Open Datasets | Yes | "We conduct experiments on four widely used GSC datasets including AIDS700, LINUX, IMDB [1], and NCI109 [2]." |
| Dataset Splits | Yes | "Following the same splits as [1, 3], i.e., 60%, 20%, and 20% of all graphs as training set, validation set, and query set, respectively." |
| Hardware Specification | Yes | "All experiments are implemented with a single machine with 1 NVIDIA Quadro RTX 8000 GPU." |
| Software Dependencies | No | The paper does not provide specific version numbers for the software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | "T is a hyper-parameter controlling the output dimension, which is assigned as 16 for all datasets in our settings. For simplicity, we uniformly set p = 2 (i.e., ℓ2 distance) for all datasets, and analyze the sensitivity of the hyper-parameter p in Section 5.4. Combining AReg and GED discriminator, the training stage aims to minimize the following overall objective function L = L_GED + λL_AReg, where λ is an adjustable hyper-parameter controlling the strength of the regularization term. We give more implementation details of ERIC and baselines in Appendix B.2." |
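The 60%/20%/20% train/validation/query protocol quoted in the Dataset Splits row can be sketched as below. This is a minimal illustration, not the authors' code: the shuffle, the fixed seed, and the function name `split_graphs` are all assumptions for the example.

```python
import random

def split_graphs(graphs, seed=0):
    """Partition a graph list into 60% train / 20% validation / 20% query.

    Shuffling with a fixed seed is an illustrative assumption; the paper
    only states the split proportions, following prior work [1, 3].
    """
    idx = list(range(len(graphs)))
    random.Random(seed).shuffle(idx)
    n = len(graphs)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    train = [graphs[i] for i in idx[:n_train]]
    val = [graphs[i] for i in idx[n_train:n_train + n_val]]
    query = [graphs[i] for i in idx[n_train + n_val:]]
    return train, val, query
```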
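The Experiment Setup row quotes two concrete formulas: an ℓp distance with p = 2, and the overall objective L = L_GED + λ·L_AReg. A minimal sketch of both follows; the function names and the example λ value are hypothetical, and the actual loss terms in the paper are computed by the GED discriminator and the AReg regularizer, which are not reproduced here.

```python
def lp_distance(u, v, p=2):
    """ℓp (Minkowski) distance between two embedding vectors.

    The paper uniformly sets p = 2, i.e., the Euclidean (ℓ2) distance.
    """
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

def overall_objective(l_ged, l_areg, lam=0.1):
    """Overall training objective L = L_GED + λ · L_AReg.

    `lam` (λ) controls the strength of the alignment regularization term;
    0.1 is an illustrative placeholder, not a value from the paper.
    """
    return l_ged + lam * l_areg
```

For instance, with p = 2 the distance between embeddings (0, 0) and (3, 4) is 5.0, and with λ = 0.5 the losses L_GED = 1.0 and L_AReg = 2.0 combine to L = 2.0.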