Slow Learning and Fast Inference: Efficient Graph Similarity Computation via Knowledge Distillation
Authors: Can Qin, Handong Zhao, Lichen Wang, Huan Wang, Yulun Zhang, Yun Fu
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental analysis on the real-world datasets demonstrates the superiority of our approach over the state-of-the-art methods on both accuracy and efficiency. |
| Researcher Affiliation | Collaboration | 1Department of Electrical and Computer Engineering, Northeastern University 2Khoury College of Computer Science, Northeastern University 3Adobe Research |
| Pseudocode | No | The paper describes the model architecture and processes in detail, but it does not include a formal pseudocode block or algorithm listing. |
| Open Source Code | Yes | The code is uploaded on https://github.com/canqin001/Efficient_Graph_Similarity_Computation |
| Open Datasets | Yes | AIDS (i.e., AIDS700nef) is composed of 700 chemical compound graphs, which are split into 560/140 for training and test. Each graph has 10 or fewer nodes assigned with 29 types of labels. We have used the standard dataloader, i.e., GEDDataset, directly provided in PyG (see the loading sketch after this table). |
| Dataset Splits | No | The paper explicitly states training and test splits for the datasets (e.g., '560/140 for training and test' for AIDS), but it does not specify a separate validation set or its size. |
| Hardware Specification | Yes | All experiments are run on the machine with Intel i7-5930K CPU @ 3.50GHz with 64GB memory. |
| Software Dependencies | No | The paper mentions using 'PyTorch Geometric (PyG)' and 'Adam' as the optimizer. However, specific version numbers for these software dependencies are not provided, which is crucial for reproducibility. |
| Experiment Setup | Yes | To optimize the proposed model, we take the Adam [20] as the optimizer based on PyTorch Geometric (PyG) [31, 10]. The learning rate is assigned as 0.001 with weight decay 0.0005. The batch size is 128, and the model will be trained over 6,000 epochs. A sketch of this configuration follows the table. |
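
A minimal sketch of the dataset loading described in the Open Datasets row, using PyG's `GEDDataset` class as the paper states; the local `root` path is an assumption, not taken from the paper:

```python
# Load the AIDS700nef benchmark through PyG's GEDDataset class.
# root="data/AIDS700nef" is a placeholder path (assumption).
from torch_geometric.datasets import GEDDataset

train_set = GEDDataset(root="data/AIDS700nef", name="AIDS700nef", train=True)
test_set = GEDDataset(root="data/AIDS700nef", name="AIDS700nef", train=False)

print(len(train_set), len(test_set))  # 560 / 140, matching the paper's split
# Ground-truth (normalized) graph edit distances between graph pairs are
# exposed by the dataset object itself, e.g. train_set.norm_ged.
```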
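
And a skeletal sketch of the reported optimization setup (Adam, learning rate 0.001, weight decay 0.0005, batch size 128, 6,000 epochs). The linear placeholder model and the omitted forward pass are assumptions: the paper's teacher/student architecture and pairwise similarity loss are not reproduced here.

```python
# Training-loop skeleton reflecting only the hyperparameters reported
# in the paper; the model is a hypothetical stand-in.
import torch
from torch_geometric.datasets import GEDDataset
from torch_geometric.loader import DataLoader

train_set = GEDDataset(root="data/AIDS700nef", name="AIDS700nef", train=True)
loader = DataLoader(train_set, batch_size=128, shuffle=True)

model = torch.nn.Linear(train_set.num_features, 1)  # placeholder network (assumption)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0005)

for epoch in range(6000):  # "trained over 6,000 epochs"
    for batch in loader:
        optimizer.zero_grad()
        # The paper's pairwise graph-similarity loss would be computed here;
        # the forward pass is omitted because the architecture is not sketched.
        # loss.backward(); optimizer.step()
        pass
```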