Filtration-Enhanced Graph Transformation

Authors: Zijian Chen, Rong-Hua Li, Hongchao Qin, Huanzhong Duan, Yanxiong Lu, Qiangqiang Dai, Guoren Wang

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We theoretically and experimentally demonstrate that our solutions exhibit significantly better performance than the state-of-the-art solutions for graph classification tasks.
Researcher Affiliation | Collaboration | Zijian Chen¹, Rong-Hua Li¹, Hongchao Qin¹, Huanzhong Duan², Yanxiong Lu², Qiangqiang Dai¹ and Guoren Wang¹; ¹Beijing Institute of Technology, ²Tencent Blockchain
Pseudocode | No | No structured pseudocode or algorithm blocks are present or explicitly labeled in the paper.
Open Source Code | Yes | Our source code is available at https://github.com/BlockChanZJ/Filtration-Enhanced-Graph-Transformation.
Open Datasets | Yes | We use 7 benchmark attributed graph datasets, including 3 datasets with native edge weights (BZR_MD, COX2_MD, ER_MD) and 4 datasets with continuous vertex attributes (BZR, DHFR, ENZYMES, PROTEINS). All these 7 benchmark datasets are widely used in graph classification studies [Kriege and Mutzel, 2012; O'Bray et al., 2021]. ... All the datasets are available at ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldataset.
Dataset Splits | Yes | For graph kernels, we use C-SVM as a classifier and perform 10-fold cross-validation. The evaluation process was repeated 10 times for each dataset and each method. For a fair comparison, we follow the standard data splits in [Errica et al., 2020].
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | The paper mentions that 'All graph kernels are implemented in Python using the GraKeL library', but does not specify version numbers for Python, GraKeL, or any other software dependencies.
Experiment Setup | Yes | The parameter C of the SVM is tuned from {10^-3, ..., 10^3}. The numbers of layers of FEG/FES are chosen from {2, ..., 5} for full FEG/FES, and from {2, ..., 10, 20, 50} for partial FEG/FES. ... All three GNNs are trained for 500 epochs with 50-epoch patience for early stopping and 64 hidden units. The number of convolution layers is selected from {2, 3, 4}. For GraphSAGE and GIN, we set the learning rate to 0.001 and the batch size to 128, and dropout is chosen from {0, 0.5}. For GraphSNN, we use a dropout of 0.5, a batch size of 64, and a learning rate chosen from {0.01, 0.001}.
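
The three sketches below illustrate the Open Datasets, Dataset Splits, and Experiment Setup rows of the table. First, a minimal sketch of fetching the listed benchmarks with GraKeL's fetch_dataset helper, which downloads them from the same ls11-www.cs.tu-dortmund.de collection; the prefer_attr_* flags are assumptions about how the data would be consumed, not settings reported by the paper.

```python
# Hedged sketch: pull the 7 TU Dortmund benchmarks named in the Open Datasets row.
from grakel.datasets import fetch_dataset

# Datasets with native edge weights (edge attributes requested, assumption).
for name in ["BZR_MD", "COX2_MD", "ER_MD"]:
    bundle = fetch_dataset(name, verbose=False, prefer_attr_edges=True)
    graphs, labels = bundle.data, bundle.target
    print(name, len(graphs), "graphs")

# Datasets with continuous vertex attributes (node attributes requested, assumption).
for name in ["BZR", "DHFR", "ENZYMES", "PROTEINS"]:
    bundle = fetch_dataset(name, verbose=False, prefer_attr_nodes=True)
    graphs, labels = bundle.data, bundle.target
    print(name, len(graphs), "graphs")
```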
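
Next, a minimal sketch of the quoted evaluation protocol: a C-SVM on a precomputed graph-kernel matrix, C tuned over {10^-3, ..., 10^3}, and 10-fold cross-validation repeated 10 times. The Weisfeiler-Lehman kernel here is only a stand-in for whichever kernel is applied after the FEG/FES transformation, and the nested grid search is one reading of "tuned", not a protocol the paper spells out.

```python
# Hedged sketch of the Dataset Splits / Experiment Setup protocol for graph kernels.
import numpy as np
from grakel.datasets import fetch_dataset
from grakel.kernels import WeisfeilerLehman, VertexHistogram
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

bundle = fetch_dataset("ENZYMES", verbose=False)
y = np.asarray(bundle.target)

# Precompute the kernel matrix once; the SVM then operates on it directly.
kernel = WeisfeilerLehman(n_iter=3, base_graph_kernel=VertexHistogram, normalize=True)
K = kernel.fit_transform(bundle.data)

# Inner grid search over C in {10^-3, ..., 10^3}, outer 10-fold CV repeated 10 times.
param_grid = {"C": [10.0 ** e for e in range(-3, 4)]}
svm = GridSearchCV(SVC(kernel="precomputed"), param_grid, cv=5)

accuracies = []
for repeat in range(10):
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=repeat)
    accuracies.extend(cross_val_score(svm, K, y, cv=outer))
print(f"mean accuracy: {np.mean(accuracies):.3f} +/- {np.std(accuracies):.3f}")
```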
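
Finally, the GNN settings from the Experiment Setup row written out as explicit search grids. The dictionary layout and the expand helper are illustrative only; the values are taken from the quoted text.

```python
# Hedged sketch: hyperparameter grids for the three GNN baselines.
from itertools import product

COMMON = {"epochs": 500, "patience": 50, "hidden_units": 64, "conv_layers": [2, 3, 4]}

GRIDS = {
    "GraphSAGE": {**COMMON, "lr": [0.001], "batch_size": [128], "dropout": [0.0, 0.5]},
    "GIN":       {**COMMON, "lr": [0.001], "batch_size": [128], "dropout": [0.0, 0.5]},
    "GraphSNN":  {**COMMON, "lr": [0.01, 0.001], "batch_size": [64], "dropout": [0.5]},
}

def expand(grid):
    """Yield one flat training config per combination of the swept hyperparameters."""
    keys = ["conv_layers", "lr", "batch_size", "dropout"]
    for layers, lr, batch, drop in product(*(grid[k] for k in keys)):
        yield {"epochs": grid["epochs"], "patience": grid["patience"],
               "hidden_units": grid["hidden_units"], "num_layers": layers,
               "lr": lr, "batch_size": batch, "dropout": drop}

for model, grid in GRIDS.items():
    print(model, len(list(expand(grid))), "configurations")
```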