Iteratively Refined Early Interaction Alignment for Subgraph Matching based Graph Retrieval
Authors: Ashwin Ramachandran, Vaibhav Raj, Indradyumna Roy, Soumen Chakrabarti, Abir De
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on several datasets show that the alignments get progressively refined with successive rounds, resulting in significantly better retrieval performance than existing methods. We demonstrate that all three innovations contribute to the enhanced accuracy. |
| Researcher Affiliation | Academia | Ashwin Ramachandran¹, Vaibhav Raj², Indradyumna Roy², Soumen Chakrabarti², Abir De²; ¹UC San Diego, ²IIT Bombay; ashwinramg@ucsd.edu, {vaibhavraj, indraroy15, soumen, abir}@cse.iitb.ac.in |
| Pseudocode | No | The paper describes algorithms through text and equations (e.g., Equations 7, 8, 12, 13) but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and datasets are publicly available at https://github.com/structlearning/isonetpp. |
| Open Datasets | Yes | We use six real world datasets in our experiments, viz., AIDS, Mutag, PTC-FM (FM), PTC-FR (FR), PTC-MM (MM) and PTC-MR (MR), which were also used in [27, 35]. |
| Dataset Splits | Yes | Given a fixed corpus set C, we split the query set Q into 60% training, 15% validation and 25% test set. |
| Hardware Specification | Yes | IsoNet++ (Node), IsoNet++ (Edge), GMN, IsoNet (Edge) and ablations on top of these were trained on Nvidia RTX A6000 (48 GB) GPUs while other baselines like GraphSim, GOTSim etc. were trained on Nvidia A100 (80 GB) GPUs. |
| Software Dependencies | Yes | All experiments were run with Python 3.10.13 and PyTorch 2.1.2. |
| Experiment Setup | Yes | All models were trained using early stopping with MAP score on the validation split as a stopping criterion. For early stopping, we used a patience of 50 with a tolerance of 10^-4. We used the Adam optimizer with the learning rate as 10^-3 and the weight decay parameter as 5 x 10^-4. We set batch size to 128 and maximum number of epochs to 1000. |
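
The "Dataset Splits" row above quotes a 60% / 15% / 25% train/validation/test split of the query set over a fixed corpus. The sketch below is a minimal illustration of such a split; the function name, seeding scheme, and use of `torch.randperm` are assumptions for illustration, not the authors' released code.

```python
import torch

def split_queries(num_queries: int, seed: int = 0):
    """Split query indices into 60% train, 15% validation, 25% test.

    Illustrative sketch only: the helper name, the seeding scheme, and the
    use of torch.randperm are assumptions, not taken from the paper's code.
    """
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_queries, generator=g)
    n_train = int(0.60 * num_queries)
    n_val = int(0.15 * num_queries)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]
    return train_idx, val_idx, test_idx
```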
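The "Experiment Setup" row quotes the training hyperparameters: Adam with learning rate 10^-3 and weight decay 5 x 10^-4, batch size 128, at most 1000 epochs, and early stopping on validation MAP with patience 50 and tolerance 10^-4. The sketch below illustrates one plausible way to wire those numbers up in PyTorch; only the hyperparameter values come from the paper, while the `make_optimizer` helper and the `EarlyStopping` class interface are hypothetical.

```python
import torch

def make_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    """Adam with the quoted hyperparameters: lr=1e-3, weight_decay=5e-4."""
    return torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)

class EarlyStopping:
    """Stop when validation MAP fails to improve by `tolerance` for `patience` epochs.

    Only the patience (50) and tolerance (1e-4) values are from the paper;
    this class and its interface are an illustrative assumption.
    """
    def __init__(self, patience: int = 50, tolerance: float = 1e-4):
        self.patience = patience
        self.tolerance = tolerance
        self.best_map = float("-inf")
        self.epochs_without_improvement = 0

    def step(self, val_map: float) -> bool:
        """Record this epoch's validation MAP; return True if training should stop."""
        if val_map > self.best_map + self.tolerance:
            self.best_map = val_map
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.patience
```

In a training loop, `stopper.step(val_map)` would be called once per epoch after evaluating MAP on the validation split, breaking out of the loop (or before reaching the 1000-epoch cap) when it returns True.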