Taming graph kernels with random features

Authors: Krzysztof Marcin Choromanski

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically verify the quality of GRFs via various experiments, including downstream applications of graph kernels.
Researcher Affiliation | Collaboration | Google DeepMind and Columbia University. Correspondence to: Krzysztof Choromanski <kchoro@google.com>.
Pseudocode | Yes | Algorithm 1: Computing a signature vector for a given i.
Open Source Code | No | The paper contains no explicit statement about releasing source code for the described methodology, provides no repository link, and does not mention code in supplementary materials.
Open Datasets | Yes | The real-world graphs were accessed from the repositories described in (Ivashkin, 2023). Ivashkin, V. Community graphs repository, 2023. URL https://github.com/vlivashkin/community-graphs.
Dataset Splits | No | The paper uses various graphs for its experiments but does not give specific dataset splits (e.g., train/validation/test percentages or counts), nor does it mention cross-validation or any other data-partitioning methodology.
Hardware Specification | No | The paper does not specify the hardware (GPU or CPU models, processor types, or memory) used to run the experiments.
Software Dependencies | No | The paper does not list the software dependencies, with version numbers, required to replicate the experiments.
Experiment Setup | Yes | We took p_term = 0.1 since it worked well in several other tests (see Sec. 5.2, Sec. 5.3). We fixed σ² = 0.2. The reported empirical relative Frobenius-norm errors were obtained by averaging over s = 10 independent experiments. In all the experiments we used σ² = 0.2, p_term ≈ 1/400 and K ≈ 0.6N. We chose the number of clusters nb_clusters = 3.
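
The Pseudocode row above refers to the paper's Algorithm 1, which builds a graph random feature (GRF) signature vector for a node i out of random walks that terminate at each step with probability p_term. The sketch below is only an illustration of that mechanism, not the paper's exact algorithm: the function name, the dense-adjacency representation, and the particular load-reweighting rule (an importance-sampling correction so that the signature estimates a geometric series of the weight matrix) are assumptions made for this example.

```python
import numpy as np

def grf_signature(adj, i, m=100, p_term=0.1, rng=None):
    """Illustrative GRF-style signature vector for node i (not the paper's exact Algorithm 1).

    adj:    dense (N x N) nonnegative weight matrix.
    m:      number of independent terminating random walks.
    p_term: probability of terminating the walk at each step.

    Each walk deposits an importance-sampling-corrected load at every node
    it visits, so phi[v] estimates sum_{l>=0} (W^l)[i, v] (a geometric
    series of the weight matrix), assuming that series converges.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    phi = np.zeros(n)
    degrees = (adj > 0).sum(axis=1)                  # number of neighbours per node
    neighbours = [np.flatnonzero(adj[u]) for u in range(n)]

    for _ in range(m):
        u, load = i, 1.0
        phi[u] += load                               # length-0 walk contributes at i itself
        while degrees[u] > 0 and rng.random() > p_term:
            v = rng.choice(neighbours[u])            # uniform random neighbour
            # correct for the sampling probability 1/deg(u) and the
            # survival probability (1 - p_term) of taking this step
            load *= adj[u, v] * degrees[u] / (1.0 - p_term)
            phi[v] += load
            u = v
    return phi / m
```

In the paper, kernel entries are then recovered from dot products of such signature vectors built from independent walk ensembles; the single-ensemble estimate above is kept deliberately simple to show only the terminating-walk mechanics.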
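
The Experiment Setup row states that the reported relative Frobenius-norm errors are averages over s = 10 independent experiments. A small helper along the following lines reproduces that evaluation protocol; the estimator callback `estimate_fn` is a hypothetical stand-in for whatever approximation (e.g., a GRF-based kernel estimate) is being evaluated.

```python
import numpy as np

def mean_relative_frobenius_error(K_exact, estimate_fn, s=10, seed=0):
    """Average relative Frobenius-norm error over s independent estimates.

    K_exact:     the exact kernel matrix (numpy array).
    estimate_fn: callable taking a numpy Generator and returning an
                 approximate kernel matrix of the same shape.
    """
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(s):
        K_hat = estimate_fn(rng)
        errors.append(np.linalg.norm(K_hat - K_exact, "fro")
                      / np.linalg.norm(K_exact, "fro"))
    return float(np.mean(errors))
```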