Learning-Based Efficient Graph Similarity Computation via Multi-Scale Convolutional Set Matching

Authors: Yunsheng Bai, Hao Ding, Ken Gu, Yizhou Sun, Wei Wang | Pages 3219-3226

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The model, GRAPHSIM, achieves the state-of-the-art performance on four real-world graph datasets under six out of eight settings (here we count a specific dataset and metric combination as one setting), compared to existing popular methods for approximate Graph Edit Distance (GED) and Maximum Common Subgraph (MCS) computation.
Researcher Affiliation | Collaboration | Yunsheng Bai (1), Hao Ding (2), Ken Gu (1), Yizhou Sun (1), Wei Wang (1). Affiliations: (1) University of California, Los Angeles; (2) AWS AI Labs. Emails: yba@ucla.edu, haodin@amazon.com, ken.qgu@gmail.com, {yzsun, weiwang}@cs.ucla.edu
Pseudocode | No | The paper describes the architecture and sequential stages of GRAPHSIM in Section 4, but it does not provide any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not explicitly state that source code for the described methodology has been released, nor does it include a link to a code repository.
Open Datasets | Yes | To probe the ability of GRAPHSIM to compute graph-graph similarities from graphs in different domains, we evaluate on four real graph datasets, AIDS, LINUX, IMDB, and PTC, whose detailed descriptions and statistics can be found in the supplementary material.
Dataset Splits | Yes | For each dataset, we split it into training, validation, and testing sets by 6:2:2, and report the averaged Mean Squared Error (mse), Spearman's Rank Correlation Coefficient (ρ) (Spearman 1904), Kendall's Rank Correlation Coefficient (τ) (Kendall 1938), and Precision at k (p@k) to test the accuracy and ranking performance of each GED and MCS computation method.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or processor types) used for running the experiments. It mentions "additional benefits from parallelizability and acceleration provided by GPU" but without specific models.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | No | The paper states, "The supplementary material contains more details on the data preprocessing, parameter settings, result analysis, efficiency comparison, as well as parameter sensitivity study." This indicates that detailed experimental setup information, such as specific hyperparameter values, is not provided in the main text.
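The Dataset Splits row above describes the evaluation protocol: a 6:2:2 train/validation/test split and four ranking metrics (mse, Spearman's ρ, Kendall's τ, p@k). The sketch below is a minimal, self-contained illustration of that protocol in plain Python; it is not the authors' code, and all function names are illustrative. The rank-correlation formulas use the standard tie-free definitions, which suffice for illustration.

```python
import random

def split_622(items, seed=0):
    """Shuffle and split into 60% train, 20% validation, 20% test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    a, b = int(0.6 * n), int(0.8 * n)
    return items[:a], items[a:b], items[b:]

def mse(pred, true):
    """Mean Squared Error between predicted and true similarity scores."""
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)

def _ranks(xs):
    """1-based rank of each value (ties broken by input order)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, 1):
        r[i] = rank
    return r

def spearman_rho(pred, true):
    """Spearman's rho via the classic 1 - 6*sum(d^2)/(n(n^2-1)) formula."""
    rp, rt = _ranks(pred), _ranks(true)
    n = len(pred)
    d2 = sum((a - b) ** 2 for a, b in zip(rp, rt))
    return 1 - 6 * d2 / (n * (n * n - 1))

def kendall_tau(pred, true):
    """Kendall's tau: (concordant - discordant) pairs over total pairs."""
    n = len(pred)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (pred[i] - pred[j]) * (true[i] - true[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def precision_at_k(pred, true, k):
    """Fraction of the true top-k items recovered in the predicted top-k."""
    top_pred = set(sorted(range(len(pred)), key=lambda i: -pred[i])[:k])
    top_true = set(sorted(range(len(true)), key=lambda i: -true[i])[:k])
    return len(top_pred & top_true) / k
```

In practice one would compute these over predicted vs. ground-truth GED/MCS-based similarity scores for each query graph against the database graphs, then average across queries, which matches the "averaged" wording in the quoted passage.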