D2Match: Leveraging Deep Learning and Degeneracy for Subgraph Matching

Authors: Xuanzhou Liu, Lin Zhang, Jiaqi Sun, Yujiu Yang, Haiqin Yang

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we conduct extensive experiments to show the superior performance of our D2Match and confirm that our D2Match indeed exploits the subtrees and differs from existing GNNs-based subgraph matching methods that depend on memorizing the data distribution divergence."
Researcher Affiliation | Collaboration | "1Shenzhen International Graduate School, Tsinghua University, Shenzhen, China 2International Digital Economy Academy (IDEA). Work done when Xuanzhou was interned at IDEA. Correspondence to: Yujiu Yang <yang.yujiu@sz.tsinghua.edu.cn>, Haiqin Yang <hqyang@ieee.org>."
Pseudocode | Yes | "The pseudo-code of D2Match is outlined as follows: Algorithm 1 The D2Match algorithm"
Open Source Code | No | "The python implementation of D2Match will be available at https://github.com/XuanzhouLiu/D2Match-ICML23"
Open Datasets | Yes | "We first generate synthetic data by utilizing ER-random graphs and WS-random graphs (Rex et al., 2020). [...] For the real-world data, we follow the setting in (Rex et al., 2020), including Cox2, Enzymes, Proteins, IMDB-Binary, MUTAG, Aids, and FirstMMDB."
Dataset Splits | Yes | "We split each dataset into training and testing at a ratio of 4 : 1 and report the average classification accuracy under the five-fold cross-validation." (a hedged sketch of this protocol follows the table)
Hardware Specification | No | The paper does not provide specific details on the hardware used for running experiments, such as CPU or GPU models, or memory specifications.
Software Dependencies | No | The paper mentions using Adam as an optimizer but does not specify software dependencies like programming languages, libraries, or frameworks with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | "We set all models with adjustable number of layers to 5 layers, and set the hidden dimension to 10 to avoid overfitting. [...] Both our model and all baselines use the Adam as optimizer and set the learning rate to 3e-4. [...] We test the effect of the depth of a subtree, i.e., the number of the hidden layers, and change it from 1 to 7. [...] We vary K from 1 to 7 and show the results in Fig. 2(b)." (a hedged configuration sketch follows the table)
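
The "Open Datasets" and "Dataset Splits" rows describe the data protocol only in prose. The sketch below illustrates that protocol under stated assumptions: networkx is assumed for the ER/WS random-graph generators and scikit-learn's KFold for the five-fold split; all graph sizes, edge probabilities, and rewiring rates are illustrative placeholders, not values taken from the paper.

```python
# Minimal sketch of the quoted data protocol: synthetic ER/WS graphs plus a
# 4 : 1 train/test split under five-fold cross-validation. Generator
# parameters below are placeholders, NOT values reported in the paper.
import networkx as nx
from sklearn.model_selection import KFold

def make_synthetic_graphs(n_graphs=100, n_nodes=30, seed=0):
    """Generate ER-random and WS-random graphs (placeholder parameters)."""
    graphs = []
    for i in range(n_graphs):
        graphs.append(nx.erdos_renyi_graph(n_nodes, p=0.2, seed=seed + i))
        graphs.append(nx.watts_strogatz_graph(n_nodes, k=4, p=0.1, seed=seed + i))
    return graphs

graphs = make_synthetic_graphs()

# Five-fold cross-validation: each fold is a 4 : 1 (80% / 20%) train/test
# split, and the reported metric is accuracy averaged over the five folds.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(graphs)):
    train_set = [graphs[i] for i in train_idx]
    test_set = [graphs[i] for i in test_idx]
    # ... train the matching model on train_set, report accuracy on test_set ...
```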
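The "Experiment Setup" row quotes the key hyperparameters (5 layers, hidden dimension 10, Adam, learning rate 3e-4, subtree depth K swept from 1 to 7). The sketch below wires those numbers into a generic PyTorch configuration; the model is a placeholder stack of linear layers that only mirrors the stated depth and width, not the actual D2Match architecture.

```python
# Hedged sketch of the quoted training configuration. The placeholder model
# is NOT D2Match; it only reproduces the depth/width/optimizer settings.
import torch
import torch.nn as nn

NUM_LAYERS = 5        # "all models with adjustable number of layers to 5 layers"
HIDDEN_DIM = 10       # "set the hidden dimension to 10 to avoid overfitting"
LEARNING_RATE = 3e-4  # "set the learning rate to 3e-4"

def build_placeholder_model(num_layers=NUM_LAYERS, hidden_dim=HIDDEN_DIM, in_dim=HIDDEN_DIM):
    layers, dim = [], in_dim
    for _ in range(num_layers):
        layers += [nn.Linear(dim, hidden_dim), nn.ReLU()]
        dim = hidden_dim
    layers.append(nn.Linear(dim, 1))  # binary subgraph-containment score
    return nn.Sequential(*layers)

model = build_placeholder_model()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

# The sensitivity study varies the subtree depth K (number of hidden layers)
# from 1 to 7 and records test accuracy for each setting.
for K in range(1, 8):
    model_k = build_placeholder_model(num_layers=K)
    optimizer_k = torch.optim.Adam(model_k.parameters(), lr=LEARNING_RATE)
    # ... train model_k and record test accuracy for this K ...
```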