General Graph Random Features

Authors: Isaac Reid, Krzysztof Marcin Choromanski, Eli Berger, Adrian Weller

Venue: ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In Sec. 3 we run extensive experiments, including: pointwise estimation of a variety of popular graph kernels (Sec. 3.1); simulation of time evolution under non-homogeneous graph ordinary differential equations (Sec. 3.2); kernelised k-means node clustering including on large graphs (Sec. 3.3); training a neural modulation function to suppress the mean square error of fixed kernel estimates (Sec. 3.4); and training a neural modulation function to learn a kernel for node attribute prediction on triangular mesh graphs (Sec. 3.5)."
Researcher Affiliation | Collaboration | Isaac Reid (University of Cambridge), Krzysztof Choromanski (Google DeepMind, Columbia University), Eli Berger (University of Haifa), Adrian Weller (University of Cambridge, Alan Turing Institute)
Pseudocode | Yes | Algorithm 1: "Constructing a random feature vector φ_f(i) ∈ ℝ^N to approximate K_α(W)" (a hedged walk-based sketch of such a construction appears after the table).
Open Source Code | Yes | "Code is available at https://github.com/isaac-reid/general_graph_random_features." ... "Source code is available at https://github.com/isaac-reid/general_graph_random_features."
Open Datasets | Yes | "We consider 8 different graphs: a small random Erdős-Rényi graph, a larger Erdős-Rényi graph, a binary tree, a d-regular graph and 4 real-world examples (karate, dolphins, football and eurosis) (Ivashkin, 2023)." ... "For graphs in this dataset (Dawson-Haggerty, 2023)..." ... "The datasets we use correspond to standard graphs and are freely available online. We link suitable repositories in every instance." (A minimal loading example for one of these graphs appears after the table.)
Dataset Splits | No | The paper describes training models and evaluating performance but does not provide specific details on train/validation/test dataset splits (percentages, counts, or explicit splitting methodology).
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions software components such as the Adam optimiser and ReLU but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | "We define our loss function to be the Frobenius norm error between a target Gram matrix and our g-GRF-approximated Gram matrix on the small Erdős-Rényi graph (N = 20) with m = 16 walks." ... "f^(N)(x) = σ_softplus(w_2 σ_ReLU(w_1 x + b_1) + b_2) (Eq. 16), where w_1, b_1, w_2, b_2 ∈ ℝ..." ... "We minimise the loss with the Adam optimiser and a decaying learning rate (LR = 0.01, γ = 0.975, 1000 epochs)." (A toy training sketch appears after the table.)
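
To make the pseudocode row concrete, below is a minimal sketch of a walk-based graph random feature construction in the spirit of Algorithm 1. The function name ggrf_feature, the uniform-neighbour transition rule, the halting probability p_halt and the load-update formula are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def ggrf_feature(W, i, f, m=16, p_halt=0.1, rng=None):
    """Sketch: a length-N feature vector phi_f(i) built from m random walks rooted
    at node i, with contributions modulated by f(step). Assumption-laden; see the
    paper's Algorithm 1 and repository for the exact construction."""
    rng = np.random.default_rng() if rng is None else rng
    phi = np.zeros(W.shape[0])
    for _ in range(m):
        node, load, step = i, 1.0, 0
        phi[node] += load * f(step)           # deposit a contribution at the current node
        while rng.random() > p_halt:          # continue the walk with probability 1 - p_halt
            neighbours = np.flatnonzero(W[node])
            if neighbours.size == 0:          # dangling node: stop this walk
                break
            nxt = rng.choice(neighbours)      # uniform transition over neighbours (assumption)
            # importance-weight by the edge weight over the probability of taking this step
            load *= W[node, nxt] * neighbours.size / (1.0 - p_halt)
            node, step = nxt, step + 1
            phi[node] += load * f(step)
    return phi / m                            # average over the m walks
```

Under these assumptions, a kernel entry K(W)_ij would be estimated by the dot product of two independently drawn feature vectors, φ_f(i)·φ_f(j), with the modulation function f selecting which kernel in the family is approximated.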
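The real-world graphs listed in the datasets row are standard benchmarks. As one illustration, Zachary's karate club graph ships with networkx, so an adjacency matrix of the kind consumed above can be obtained directly; the remaining graphs are available from the repositories linked in the paper.

```python
import networkx as nx

# Zachary's karate club: one of the standard real-world graphs named in the paper.
G = nx.karate_club_graph()
W = nx.to_numpy_array(G)   # 34 x 34 adjacency matrix
print(G.number_of_nodes(), W.shape)
```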
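Finally, a toy PyTorch sketch of the experiment-setup row: the modulation network of Eq. (16) (ReLU hidden layer, softplus output) trained with Adam and an exponentially decaying learning rate (LR = 0.01, γ = 0.975, 1000 epochs) against a Frobenius-norm loss. The hidden width, the random adjacency matrix, the diffusion-kernel target and the truncated-power-series Gram estimate toy_gram are all stand-ins; the paper instead differentiates through its random-walk g-GRF estimator (see the linked code).

```python
import torch

torch.manual_seed(0)

# Toy stand-ins (not the paper's pipeline): a small Erdős-Rényi-style adjacency
# matrix and a diffusion-kernel target Gram matrix.
N = 20
A = (torch.rand(N, N) < 0.2).float()
W = torch.triu(A, diagonal=1)
W = (W + W.T) / N                      # symmetric, scaled for a stable power series
K_target = torch.matrix_exp(W)         # e.g. a diffusion kernel as the target

# Modulation function of Eq. (16): softplus(w2 relu(w1 x + b1) + b2).
# The hidden width (16) is an assumption.
modulation = torch.nn.Sequential(
    torch.nn.Linear(1, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
    torch.nn.Softplus(),
)

def toy_gram(f, W, k_max=10):
    """Differentiable stand-in for the g-GRF Gram estimate: a truncated series
    sum_k f(k) W^k. The paper uses its random-walk estimator instead."""
    K_hat = torch.zeros_like(W)
    Wk = torch.eye(W.shape[0])
    for k in range(k_max):
        K_hat = K_hat + f(torch.tensor([[float(k)]])) * Wk
        Wk = Wk @ W
    return K_hat

optimiser = torch.optim.Adam(modulation.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimiser, gamma=0.975)

for epoch in range(1000):
    loss = torch.linalg.norm(toy_gram(modulation, W) - K_target, ord="fro")
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    scheduler.step()

print(float(loss))
```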