Graph Edit Distance Learning via Modeling Optimum Matchings with Constraints
Authors: Yun Peng, Byron Choi, Jianliang Xu
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that our method is 4.2x-103.8x more accurate than the state-of-the-art methods on real-world benchmark graphs. |
| Researcher Affiliation | Academia | Yun Peng, Byron Choi, Jianliang Xu, Hong Kong Baptist University, {yunpeng, bchoi, xujl}@comp.hkbu.edu.hk |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available online: https://github.com/csypeng/graph_edit_distance_learning |
| Open Datasets | Yes | We use four real graph datasets AIDS, IMDB, LINUX and PTC that are from different domains in our experiments. The datasets are the same as those used in [Bai et al., 2020]. |
| Dataset Splits | Yes | To answer Q1 with this experiment, we sample 6k, 2k, 2k pairs of graphs in G 30 as the training data, the validation data and the test data, respectively. |
| Hardware Specification | Yes | The experiments are conducted using PyTorch on a server with an Intel Xeon Gold 6230R CPU, 768 GB RAM, and an NVIDIA Tesla K80 GPU. |
| Software Dependencies | No | The paper mentions PyTorch and the Adam optimizer but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We use two graph convolution layers and ReLU as the activation function. We use the one-hot encoding of node degree as the initial node embedding. The embedding dimensions are 32. ... We set the batch size to 1 and use the Adam optimizer. The initial learning rate is 0.005 and is reduced by a factor of 0.96 every 5 epochs. We set the number of epochs to 600, and select the best model based on the lowest MSE of GED on the validation data. |
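The training recipe in the last row can be sketched in plain Python. The helper names are hypothetical, and reading "reduced by 0.96 for each 5 epochs" as a multiplicative step decay (multiply the learning rate by 0.96 every 5 epochs) is an assumption; the paper's code may implement the schedule differently:

```python
def learning_rate(epoch, initial_lr=0.005, decay=0.96, step=5):
    """Step-decayed learning rate: start at 0.005 and multiply by 0.96
    every 5 epochs (assumed reading of the quoted schedule)."""
    return initial_lr * decay ** (epoch // step)

def select_best_epoch(val_mse_per_epoch):
    """Model selection as described in the table: pick the epoch with the
    lowest MSE of GED on the validation data."""
    return min(range(len(val_mse_per_epoch)), key=val_mse_per_epoch.__getitem__)

# Over the 600-epoch run, the rate decays geometrically:
lrs = [learning_rate(e) for e in range(600)]
# lrs[0:5] stay at the initial 0.005; each subsequent 5-epoch block
# is 0.96 times the previous one.
```

In PyTorch this schedule would correspond to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.96)` wrapped around an Adam optimizer with `lr=0.005`.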