Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Enhanced Graph Similarity Learning via Adaptive Multi-scale Feature Fusion

Authors: Cuifang Zou, Guangquan Lu, Wenzhen Zhang, Xuxia Zeng, Shilong Lin, Longqing Du, Shichao Zhang

IJCAI 2025

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments on benchmark datasets including AIDS700nef, LINUX, IMDBMulti, and PTC show that AMFF significantly outperforms existing methods on several metrics. The paper also includes a dedicated 'Experiment' section (Section 5), which details datasets, baselines, evaluation metrics, effectiveness, efficiency, and ablation studies, presenting results in tables and figures. |
| Researcher Affiliation | Academia | All authors are affiliated with Guangxi Normal University, an academic institution. The email domains used are primarily '@stu.gxnu.edu.cn' and '@mailbox.gxnu.edu.cn', indicating academic affiliations. While one email is '@outlook.com', the associated institution is still academic. |
| Pseudocode | Yes | The paper contains a clearly labeled algorithm block titled 'Algorithm 1 The Algorithm of AMFF.' on page 5. |
| Open Source Code | No | The paper does not provide an explicit statement about, or a direct link to, open-source code for the described methodology, nor does it mention supplementary materials containing code. |
| Open Datasets | Yes | The paper explicitly uses well-known benchmark datasets (AIDS700nef, LINUX, IMDBMulti, and PTC), briefly describing each in Section 5.1. The use of these established datasets implies their public availability. |
| Dataset Splits | No | The paper mentions training on graph pairs and computing MSE loss over a 'training set' (Section 4.4), but it does not specify explicit dataset splits (e.g., train/validation/test percentages or references to predefined splits for the benchmark datasets used). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models, memory, or the computing environment. |
| Software Dependencies | No | The paper does not provide version numbers for any ancillary software or libraries used to implement the described methodology or run the experiments. |
| Experiment Setup | No | While the paper describes the model architecture and the loss function, it does not provide specific hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings for the experiments. |