KONG: Kernels for ordered-neighborhood graphs

Authors: Moez Draief, Konstantin Kutzkov, Kevin Scaman, Milan Vojnovic

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we present our evaluation of the classification accuracy and computation speed of our algorithm and comparison with other kernel-based algorithms using a set of real-world graph datasets.
Researcher Affiliation | Collaboration | Moez Draief (1), Konstantin Kutzkov (2), Kevin Scaman (1), Milan Vojnovic (2); (1) Huawei Noah's Ark Lab, (2) London School of Economics, London; moez.draief@huawei.com, kutzkov@gmail.com (corresponding author), kevin.scaman@huawei.com, m.vojnovic@lse.ac.uk
Pseudocode | Yes | Algorithm 1: EXPLICITGRAPHFEATUREMAP.
    Input: graph G = (V, E, ℓ, τ), depth h, labeling ℓ: V → L, base kernel κ
    for v ∈ V do
        Traverse the subgraph Tv rooted at v up to depth h
        Collect the node labels {ℓ(u) : u ∈ Tv}, in the order specified by τv, into a string Sv
        Sketch the explicit feature map φκ(Sv) for the base string kernel κ (without storing Sv)
    Φκ(G) ← Σ_{v ∈ V} φκ(Sv)
    return Φκ(G)
Open Source Code | Yes | Software implementation and data are available at https://github.com/kutzkov/KONG.
Open Datasets | Yes | We evaluated the algorithms on widely-used benchmark datasets from various domains [Kersting et al., 2016]. MUTAG [Debnath et al., 1991], ENZYMES [Schomburg et al., 2004], PTC [Helma et al., 2001], Proteins [Borgwardt et al., 2005] and NCI1 [Wale and Karypis, 2006] represent molecular structures, and MSRC [Neumann et al., 2016] represents semantic image processing graphs.
Dataset Splits | Yes | We performed 10-fold cross-validation using 9 folds for training and 1 fold for testing.
Hardware Specification | Yes | All algorithms were implemented in Python 3 and experiments performed on a Windows 10 laptop with an Intel i7 2.9 GHz CPU and 16 GB main memory.
Software Dependencies | No | The paper mentions Python 3, a scikit-learn implementation, and the LIBLINEAR algorithm, but gives a specific version only for Python; it does not list the key libraries with their version numbers.
Experiment Setup | Yes | Similar to previous works [Niepert et al., 2016, Yanardag and Vishwanathan, 2015], we choose the optimal number of hops h = 2 for the WL kernel and k ∈ {5, 6} for the k-walk kernel. We performed 10-fold cross-validation using 9 folds for training and 1 fold for testing. The optimal regularization parameter C for each dataset was selected from {0.1, 1, 10, 100}.
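The Algorithm 1 pseudocode in the table above can be sketched in Python. This is a minimal illustration, not the paper's implementation: it assumes the neighbour ordering τv is simply the order of each adjacency list, takes k-grams of the label string as the base string kernel's features, and uses Python's built-in `hash` for feature hashing as a stand-in for the paper's sketching technique. The function name `graph_feature_map` is hypothetical.

```python
from collections import deque

def graph_feature_map(adj, labels, h=2, k=2, dim=64):
    """Hedged sketch of Algorithm 1 (EXPLICITGRAPHFEATUREMAP).

    adj:    dict node -> ordered list of neighbours (stands in for tau_v)
    labels: dict node -> label string (the labeling l: V -> L)
    h:      traversal depth
    k:      k-gram length of the assumed base string kernel
    dim:    size of the hashed feature vector (the "sketch")
    """
    phi = [0.0] * dim  # Phi_kappa(G), summed over all vertices
    for v in adj:
        # Traverse the subgraph T_v rooted at v up to depth h (BFS);
        # nodes may be revisited, as in WL-style trees.
        s = []
        queue = deque([(v, 0)])
        while queue:
            u, d = queue.popleft()
            s.append(labels[u])  # collect labels in neighbour order
            if d < h:
                for w in adj[u]:
                    queue.append((w, d + 1))
        S_v = "".join(s)
        # Hash the k-grams of S_v into a fixed-size vector instead of
        # materialising the explicit feature map phi_kappa(S_v).
        for i in range(len(S_v) - k + 1):
            gram = S_v[i:i + k]
            phi[hash(gram) % dim] += 1.0
    return phi
```

For example, on a labelled triangle with h=1 the string for each vertex is its own label followed by its two neighbours' labels in adjacency order.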
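The evaluation protocol in the experiment-setup row (10-fold cross-validation with 9 folds for training and 1 for testing, with C selected from {0.1, 1, 10, 100}) can be sketched as follows. To stay self-contained, this uses a tiny L2-regularised logistic regression trained by SGD as a stand-in for the LIBLINEAR solver the paper uses; the function names `train_logreg` and `ten_fold_cv` are hypothetical, and selecting C on the training folds themselves (rather than an inner held-out split) is a simplification.

```python
import math
import random

def train_logreg(data, C, epochs=200, lr=0.1):
    """Tiny L2-regularised logistic regression (stand-in for LIBLINEAR)."""
    dim = len(data[0][0])
    w = [0.0] * dim
    n = len(data)
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for i in range(dim):
                # gradient of the log-loss plus an L2 penalty scaled by 1/C
                w[i] -= lr * ((p - y) * x[i] + w[i] / (C * n))
    return w

def accuracy(w, data):
    correct = sum(1 for x, y in data
                  if (sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y == 1))
    return correct / len(data)

def ten_fold_cv(data, grid=(0.1, 1, 10, 100), seed=0):
    """10-fold CV: 9 folds train / 1 fold test, C chosen from the grid."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    folds = [data[i::10] for i in range(10)]
    scores = []
    for i in range(10):
        test = folds[i]
        train = [ex for j, f in enumerate(folds) if j != i for ex in f]
        # pick C by accuracy on the training folds (simplified; an inner
        # held-out split would be more rigorous)
        best = max(grid, key=lambda C: accuracy(train_logreg(train, C), train))
        scores.append(accuracy(train_logreg(train, best), test))
    return sum(scores) / len(scores)
```

In the paper's experiments the inputs would be the sketched graph feature vectors from Algorithm 1 with their class labels; here any list of `(feature_vector, 0/1 label)` pairs works.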