A Simple yet Effective Method for Graph Classification

Authors: Junran Wu, Shangzhe Li, Jianhao Li, Yicheng Pan, Ke Xu

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental
"We empirically validate our methods with several graph classification benchmarks and demonstrate that they achieve better performance and lower computational consumption than competing approaches."

Researcher Affiliation: Academia
(1) State Key Lab of Software Development Environment, Beihang University, Beijing 100191, China; (2) School of Mathematical Science, Beihang University, Beijing 100191, China

Pseudocode: Yes
"Algorithm 1: k-dimensional coding tree on structural entropy"

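The quantity that Algorithm 1 optimizes is the structural entropy of a graph under a coding tree. Below is a minimal sketch, not the authors' Algorithm 1: it only evaluates that objective for a given coding tree, using networkx; the dictionary encoding of the tree (node id mapped to its parent and vertex set) is a hypothetical convenience chosen for illustration, and the paper's algorithm constructs the k-dimensional tree that minimizes this value.

```python
# Minimal sketch (not the authors' implementation): evaluate the structural
# entropy H^T(G) of a graph G under a given coding tree T.
import math
import networkx as nx

def volume(G, nodes):
    # Sum of degrees of the vertices in `nodes`.
    return sum(d for _, d in G.degree(nodes))

def cut(G, nodes):
    # Number of edges with exactly one endpoint inside `nodes`.
    nodes = set(nodes)
    return sum(1 for u, v in G.edges() if (u in nodes) != (v in nodes))

def structural_entropy(G, tree):
    # `tree`: dict node_id -> (parent_id, vertex_set); the root has parent None.
    vol_G = volume(G, G.nodes())
    H = 0.0
    for node_id, (parent_id, vertex_set) in tree.items():
        if parent_id is None:          # the root contributes nothing
            continue
        parent_set = tree[parent_id][1]
        g = cut(G, vertex_set)         # edges leaving this tree node's vertex set
        H -= (g / vol_G) * math.log2(volume(G, vertex_set) / volume(G, parent_set))
    return H

# Toy usage: a height-2 coding tree over the karate-club graph.
G = nx.karate_club_graph()
A = set(range(17)); B = set(G.nodes()) - A
tree = {"root": (None, set(G.nodes())), "A": ("root", A), "B": ("root", B)}
tree.update({f"v{v}": ("A" if v in A else "B", {v}) for v in G.nodes()})
print(structural_entropy(G, tree))
```
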
Open Source Code: Yes
"The code of the WL-CT kernel and HRN can be found at https://github.com/Wu-Junran/HierarchicalReporting."

Open Datasets: Yes
"Datasets. We conduct graph classification on five benchmarks: three social network datasets (IMDB-BINARY, IMDB-MULTI, and COLLAB) and two bioinformatics datasets (MUTAG and PTC) [Morris et al., 2020]."

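The five benchmarks named above are distributed in the TUDataset collection of [Morris et al., 2020]. A hedged loading sketch using PyTorch Geometric's TUDataset class follows; the paper does not state which loader it used, and PTC_MR as the PTC variant is an assumption.

```python
# Sketch only: fetch the five benchmarks via PyTorch Geometric's TUDataset
# loader. The paper does not specify its loading code; "PTC_MR" is assumed.
from torch_geometric.datasets import TUDataset

names = ["IMDB-BINARY", "IMDB-MULTI", "COLLAB", "MUTAG", "PTC_MR"]
datasets = {name: TUDataset(root="data/TUDataset", name=name) for name in names}
for name, ds in datasets.items():
    print(f"{name}: {len(ds)} graphs, {ds.num_classes} classes")
```
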
Dataset Splits: Yes
"Configurations. Following [Xu et al., 2019], 10-fold cross-validation is conducted, and we present the average accuracies achieved to validate the performance of our methods in graph classification."

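For the kernel side, the quoted protocol amounts to 10-fold cross-validation with average accuracy reported. The sketch below illustrates that protocol with scikit-learn, assuming a precomputed WL-CT kernel matrix `K` and labels `y` as placeholders; stratified folds are an assumption, and the authors' exact splitting code is not reproduced here.

```python
# Illustrative 10-fold CV over a precomputed graph kernel (placeholders K, y).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def cv_accuracy(K, y, C=1.0, folds=10, seed=0):
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)
    accs = []
    for train, test in skf.split(np.zeros(len(y)), y):
        clf = SVC(C=C, kernel="precomputed")
        clf.fit(K[np.ix_(train, train)], y[train])          # train-vs-train kernel
        accs.append(clf.score(K[np.ix_(test, train)], y[test]))  # test-vs-train kernel
    return float(np.mean(accs)), float(np.std(accs))
```
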
Hardware Specification: No
The paper does not provide specific details about the hardware used, such as GPU or CPU models; it only reports computational efficiency in terms of FLOPs.

Software Dependencies: No
The paper mentions software such as the C-support vector machine (C-SVM), scikit-learn, and the Adam optimizer, but does not specify version numbers.

Experiment Setup: Yes
"Regarding the configuration of our tree kernel, we adopt the C-support vector machine (C-SVM) [Chang and Lin, 2011] as the classifier and tune the hyperparameter C of the SVM and the height of the coding tree [2, 3, 4, 5]. ... For configuration of HRN, the number of HRN iterations is consistent with the height of the associated coding trees, which is also [2, 3, 4, 5]. All MLPs have 2 layers... We utilize the Adam optimizer and set its initial learning rate to 0.01. For a better fit, the learning rate decays by half every 50 epochs. Other tuned hyperparameters for HRN include the number of hidden dimensions {16, 32, 64}, the minibatch size {32, 128}, and the dropout ratio {0, 0.5} after LAYERPOOL."
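
To make the quoted HRN hyperparameters concrete, here is a hedged sketch in PyTorch: the search grid written out as a dictionary, and an Adam optimizer whose learning rate of 0.01 is halved every 50 epochs via a StepLR scheduler. `model` is a placeholder; the HRN architecture itself is not reproduced here, and the SVM's C grid is not listed because the paper does not give its values.

```python
# Sketch of the quoted HRN training setup, assuming PyTorch.
from itertools import product
import torch

# Hyperparameter grid quoted above (tree height = number of HRN iterations).
grid = {
    "tree_height": [2, 3, 4, 5],
    "hidden_dim": [16, 32, 64],
    "batch_size": [32, 128],
    "dropout": [0.0, 0.5],   # applied after LAYERPOOL
}
configs = [dict(zip(grid, values)) for values in product(*grid.values())]

def make_optimizer(model):
    # Adam with initial learning rate 0.01, halved every 50 epochs.
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)
    return optimizer, scheduler
```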