Interpretable Neural Subgraph Matching for Graph Retrieval

Authors: Indradyumna Roy, Venkata Sai Baba Reddy Velugoti, Soumen Chakrabarti, Abir De

AAAI 2022, pp. 8115-8123 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on diverse datasets show that ISONET outperforms recent graph retrieval formulations and systems.
Researcher Affiliation | Academia | Indradyumna Roy, Venkata Sai Baba Reddy Velugoti, Soumen Chakrabarti, Abir De; Indian Institute of Technology Bombay; {indraroy15, abir, soumen}@cse.iitb.ac.in, saibaba.rapur@gmail.com
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found.
Open Source Code | Yes | Our code is available at https://github.com/Indradyumna/ISONET.
Open Datasets | Yes | We experiment with six real world datasets: PTC-FR, PTC-FM, PTC-MM, PTC-MR, MUTAG and AIDS (Morris et al. 2020).
Dataset Splits | Yes | Given a set of query graphs Q and a set of corpus graphs C, we split Q into 60% training, 15% validation and 25% test folds. (A hedged loading and splitting sketch follows this table.)
Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or specific cloud instances) used for running experiments were mentioned in the paper.
Software Dependencies | No | The paper describes the neural network components but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | No | The paper mentions hyperparameter concepts such as a 'margin hyperparameter γ > 0', a 'temperature τ > 0', and the use of MLPs and GRUs, but does not provide concrete numerical values for these parameters or detailed training configurations (e.g., specific learning rates, batch sizes, number of epochs). (A hedged sketch of how such hyperparameters typically enter a training objective follows this table.)
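
The Open Datasets and Dataset Splits rows name concrete, checkable details, so a minimal sketch of reproducing them is included below. It assumes PyTorch Geometric's TUDataset loader for the TU collection of Morris et al. (2020), which spells the dataset names with underscores (e.g., PTC_FR), and a plain random 60/15/25 split; the loader choice, the helper names load_datasets and split_queries, and the seed are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (assumptions, not the authors' code): load the six benchmarks
# via PyTorch Geometric's TUDataset and split a set of query graphs into
# 60% train / 15% validation / 25% test, as described in the paper.
import random

from torch_geometric.datasets import TUDataset

# TUDataset uses underscores where the paper writes hyphens (PTC-FR -> PTC_FR).
DATASET_NAMES = ["PTC_FR", "PTC_FM", "PTC_MM", "PTC_MR", "MUTAG", "AIDS"]


def load_datasets(root="data"):
    """Download/load each benchmark from the TU collection (Morris et al. 2020)."""
    return {name: TUDataset(root=root, name=name) for name in DATASET_NAMES}


def split_queries(query_graphs, seed=0):
    """Randomly split query graphs into 60/15/25 train/validation/test folds."""
    graphs = list(query_graphs)
    random.Random(seed).shuffle(graphs)
    n = len(graphs)
    n_train, n_val = int(0.60 * n), int(0.15 * n)
    return {
        "train": graphs[:n_train],
        "val": graphs[n_train:n_train + n_val],
        "test": graphs[n_train + n_val:],
    }


if __name__ == "__main__":
    datasets = load_datasets()
    splits = split_queries(datasets["MUTAG"])  # the paper splits the query set Q
    print({fold: len(graphs) for fold, graphs in splits.items()})
```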
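
The Experiment Setup row mentions a margin γ > 0 and a temperature τ > 0 without giving values. The sketch below shows one common way such hyperparameters enter a retrieval objective in PyTorch: a hinge (margin) ranking loss over relevance scores and a temperature-scaled sigmoid. The hinge form, the score convention (higher means more relevant), the function names, and the numeric values gamma=0.5 and tau=0.1 are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumptions, not the authors' objective): a margin-based
# ranking loss and a temperature-scaled sigmoid, illustrating the roles of the
# gamma and tau hyperparameters named in the paper. All values are placeholders.
import torch
import torch.nn.functional as F


def margin_ranking_loss(score_pos, score_neg, gamma=0.5):
    """Hinge loss: relevant scores should exceed non-relevant ones by at least gamma."""
    return F.relu(gamma + score_neg - score_pos).mean()


def temperature_sigmoid(distance, tau=0.1):
    """Map a distance to a soft relevance score; smaller tau gives sharper decisions."""
    return torch.sigmoid(-distance / tau)


if __name__ == "__main__":
    pos = torch.tensor([0.9, 0.7])  # scores of relevant query-corpus pairs (illustrative)
    neg = torch.tensor([0.4, 0.8])  # scores of non-relevant pairs (illustrative)
    print(float(margin_ranking_loss(pos, neg)))
    print(temperature_sigmoid(torch.tensor([0.2, 1.5])))
```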