From One to All: Learning to Match Heterogeneous and Partially Overlapped Graphs

Authors: Weijie Liu, Hui Qian, Chao Zhang, Jiahao Xie, Zebang Shen, Nenggan Zheng (pp. 4109-4119)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrate that the proposed method outperforms state-of-the-art graph matching methods."
Researcher Affiliation | Collaboration | 1 Qiushi Academy for Advanced Studies, Zhejiang University; 2 College of Computer Science and Technology, Zhejiang University; 3 Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies; 4 University of Pennsylvania; 5 Zhejiang Lab; 6 State Key Lab of CAD&CG, Zhejiang University
Pseudocode | Yes | Algorithm 1: FOTA
Open Source Code | No | The paper states that the source code is written in Python 3.6 and C++, but it does not provide a link to a public repository or any other information about its availability.
Open Datasets | Yes | "We verify the efficacy of FOTA on graphs extracted from three homogeneous graphs," namely Arenas Email (http://konect.cc/networks/arenas-email/), PPI Yeast (https://www3.nd.edu/~cone/MAGNA++/), and Arxiv (http://snap.stanford.edu/data/ca-AstroPh.html). The performance of FOTA is then compared against baselines on heterogeneous graphs: Movie (https://github.com/eXascaleInfolab/JUST/tree/master/Datasets/Movies), PubMed (https://pubmed.ncbi.nlm.nih.gov/), and DBLP (https://dblp.uni-trier.de/).
Dataset Splits | No | The paper does not specify the exact training, validation, and test splits used for the datasets.
Hardware Specification | Yes | The experiments are conducted on an Ubuntu 18.04 server with a 24-core 2.70 GHz Intel Xeon Platinum 8163 CPU, an NVIDIA Tesla V100 GPU, and 92 GB RAM.
Software Dependencies | No | The paper mentions that the source code is written in "Python 3.6 and C++", but it does not provide specific version numbers for any libraries or other software dependencies.
Experiment Setup | Yes | In all experiments, the embedding dimension is set as d = 64. Setting 1×10⁻⁷ ≤ α ≤ 1×10⁻⁴ for FOTA yields improved performance over FOTA-GW; the results in Tables 3, 4 and 5 are obtained with α = 1×10⁻⁵. β was tested in {1, 0.1, 0.01, 1×10⁻³, 1×10⁻⁴, 1×10⁻⁵}; 1×10⁻³ ≤ β ≤ 1 achieves stable performance, and thus β = 0.01 is used.
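The experiment-setup row above can be collected into a small configuration sketch. This is not the authors' code: the variable names (EMBED_DIM, ALPHA, BETA_GRID, stable_betas) are our own, and only the numeric values come from the paper's reported settings.

```python
# Hypothetical config capturing the reported FOTA hyperparameters.
# Names are assumptions; values are taken from the paper's experiment setup.

EMBED_DIM = 64               # embedding dimension d = 64 in all experiments
ALPHA = 1e-5                 # alpha used for the results in Tables 3, 4 and 5
ALPHA_RANGE = (1e-7, 1e-4)   # range where FOTA improves over FOTA-GW
BETA_GRID = [1, 0.1, 0.01, 1e-3, 1e-4, 1e-5]  # beta values tested
BETA = 0.01                  # chosen beta (stable for 1e-3 <= beta <= 1)

def stable_betas(grid, low=1e-3, high=1.0):
    """Return the tested beta values falling in the reported stable range."""
    return [b for b in grid if low <= b <= high]

print(stable_betas(BETA_GRID))  # -> [1, 0.1, 0.01, 0.001]
```

Filtering the grid against the reported stable range confirms that the chosen β = 0.01 lies inside it, which is the sanity check such a config would support.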