Convolutional Kernel Networks for Graph-Structured Data
Authors: Dexiong Chen, Laurent Jacob, Julien Mairal
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate GCKN and compare its variants to state-of-the-art methods, including GNNs and graph kernels, on several real-world graph classification datasets, involving either discrete or continuous attributes. |
| Researcher Affiliation | Academia | ¹Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France. ²Univ. Lyon, Université Lyon 1, CNRS, Laboratoire de Biométrie et Biologie Évolutive UMR 5558, 69000 Lyon, France. |
| Pseudocode | Yes | Algorithm 1 Forward pass for multilayer GCKN (an illustrative single-layer sketch appears below the table) |
| Open Source Code | Yes | Our code is freely available at https://github.com/claying/GCKN. |
| Open Datasets | Yes | We use the same benchmark datasets as in Du et al. (2019), including 4 biochemical datasets MUTAG, PROTEINS, PTC and NCI1 and 3 social network datasets IMDB-B, IMDB-MULTI and COLLAB. [...] All datasets and size information about the graphs can be found in Kersting et al. (2016). http://graphkernels.cs.tu-dortmund.de (a loading sketch appears below the table) |
| Dataset Splits | Yes | We follow the same protocols as (Du et al., 2019; Xu et al., 2019), and report the average accuracy and standard deviation over a 10-fold cross validation on each dataset. We use the same data splits as Xu et al. (2019), using their code. (A cross-validation sketch appears below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using "the SVM implementation of the Cyanure toolbox (Mairal, 2019)" and "an Adam optimizer (Kingma & Ba, 2015)", but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We use an Adam optimizer (Kingma & Ba, 2015) with the initial learning rate equal to 0.01 and halved every 50 epochs, and fix the batch size to 32. [...] We tune the bandwidth of the Gaussian kernel (identical for all layers), pooling operation (local (13) or global (14)), path size k1 at the first layer, number of filters (identical for all layers) and regularization parameter λ in (11). (A training-loop sketch appears below the table.) |
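
The pseudocode row quotes the caption of Algorithm 1, the forward pass of a multilayer GCKN. For intuition only, here is a minimal single-layer sketch in plain NumPy: it enumerates paths of `k` nodes, embeds each path feature with a Nyström approximation of the Gaussian kernel, and applies global sum pooling. The function names, random filters, and jitter term are illustrative assumptions; the authors' implementation stacks several layers and learns the filters.

```python
import numpy as np

def enumerate_paths(adj, k):
    # All paths (walks without repeated nodes) visiting exactly k nodes.
    paths = [[v] for v in range(adj.shape[0])]
    for _ in range(k - 1):
        paths = [p + [int(u)] for p in paths
                 for u in np.flatnonzero(adj[p[-1]]) if u not in p]
    return paths

def gckn_layer_forward(adj, features, filters, alpha=0.5, k=3):
    # adj: (n, n) adjacency, features: (n, d) node attributes,
    # filters: (q, k*d) anchor points Z of the Nystrom approximation.
    paths = enumerate_paths(adj, k)
    # A path feature is the concatenation of its node attributes.
    P = np.stack([features[p].ravel() for p in paths])            # (m, k*d)
    # Gaussian kernel kappa(z, z') = exp(-alpha/2 * ||z - z'||^2).
    k_zx = np.exp(-0.5 * alpha *
                  ((P[:, None] - filters[None]) ** 2).sum(-1))    # (m, q)
    k_zz = np.exp(-0.5 * alpha *
                  ((filters[:, None] - filters[None]) ** 2).sum(-1))
    # Nystrom embedding: psi(x) = K_ZZ^{-1/2} kappa_Z(x).
    w, V = np.linalg.eigh(k_zz + 1e-6 * np.eye(len(filters)))
    psi = k_zx @ (V @ np.diag(w ** -0.5) @ V.T)                   # (m, q)
    return psi.sum(axis=0)  # global sum pooling -> graph representation

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # toy triangle graph
feats = rng.normal(size=(3, 4))
Z = rng.normal(size=(8, 3 * 4))                      # q=8 random filters
print(gckn_layer_forward(adj, feats, Z, alpha=0.5, k=3))
```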
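
The seven benchmarks come from the TU Dortmund graph-kernel collection linked in the datasets row. As a hedged illustration (not the authors' loader), PyTorch Geometric's `TUDataset` can fetch them; `PTC_MR` is assumed here as the PTC variant, which the collection offers under several names.

```python
from torch_geometric.datasets import TUDataset

# Fetch each benchmark from the TU Dortmund collection (downloads on first use).
for name in ["MUTAG", "PROTEINS", "PTC_MR", "NCI1",
             "IMDB-BINARY", "IMDB-MULTI", "COLLAB"]:
    dataset = TUDataset(root="data", name=name)
    print(name, len(dataset), "graphs,", dataset.num_classes, "classes")
```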
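
The evaluation protocol is 10-fold cross validation with the mean accuracy and standard deviation reported. Since the paper reuses the precomputed splits of Xu et al. (2019), the generic stratified splitter below is only a stand-in for the shape of the protocol, and `LinearSVC` likewise stands in for the Cyanure SVM mentioned in the dependencies row.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import LinearSVC  # stand-in for the Cyanure SVM

def cross_validate(X, y, C=1.0, seed=0):
    # 10-fold CV, reporting mean accuracy and its standard deviation.
    folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    accs = [LinearSVC(C=C).fit(X[tr], y[tr]).score(X[te], y[te])
            for tr, te in folds.split(X, y)]
    return float(np.mean(accs)), float(np.std(accs))

X = np.random.randn(120, 16)          # hypothetical graph representations
y = np.random.randint(0, 2, 120)
print("%.3f +/- %.3f" % cross_validate(X, y))
```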
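
The schedule in the setup row (Adam, initial learning rate 0.01 halved every 50 epochs, batch size 32) maps directly onto a standard PyTorch loop. Everything below except those three numbers is a toy stand-in: the linear model and random tensors replace the supervised GCKN and real graph features.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(200, 64)                      # hypothetical graph features
y = torch.randint(0, 2, (200,))
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Linear(64, 2)                      # toy stand-in for supervised GCKN
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# StepLR with gamma=0.5 halves the learning rate every 50 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(100):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
    scheduler.step()
```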