Graph Random Neural Features for Distance-Preserving Graph Representations

Authors: Daniele Zambon, Cesare Alippi, Lorenzo Livi

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental campaign is divided into two parts. First, Section 8.1 gives empirical evidence about the claimed convergence as the embedding dimension M grows. Second, Section 8.2 shows that the method can be effectively used as a layer of a neural network and achieves results comparable to the current state of the art on classification tasks.
Researcher Affiliation | Academia | Università della Svizzera italiana, Lugano, Switzerland; Politecnico di Milano, Milano, Italy; University of Manitoba, Winnipeg, Canada; University of Exeter, Exeter, United Kingdom.
Pseudocode | No | The paper does not contain a structured pseudocode or algorithm block, nor is there a section explicitly labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | Specifically, we considered NCI1, PROTEINS, ENZYMES, IMDB-BINARY, IMDB-MULTI and COLLAB, all available to the public (Kersting et al., 2016) and commonly used for benchmarking.
Dataset Splits | Yes | We report accuracy and standard deviation estimated on 10-fold cross-validation, where in each run we consider the optimal hyper-parameter configuration assessed on a validation set.
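The evaluation protocol quoted above (10-fold cross-validation with per-fold hyper-parameter selection on a validation set) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the fold layout, the `train_eval` interface, and the use of the next fold as the validation set are all assumptions.

```python
import numpy as np

def ten_fold_cv(X, y, configs, train_eval, seed=0):
    """Hypothetical sketch of the paper's protocol: for each of 10 folds,
    pick the hyper-parameter config with the best validation accuracy,
    then score that config on the held-out test fold."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), 10)
    accs = []
    for i in range(10):
        test = folds[i]
        val = folds[(i + 1) % 10]  # assumption: next fold serves as validation set
        train = np.concatenate([folds[j] for j in range(10)
                                if j not in (i, (i + 1) % 10)])
        # select hyper-parameters on the validation set only
        best = max(configs,
                   key=lambda c: train_eval(c, X[train], y[train], X[val], y[val]))
        # report test accuracy of the selected config
        accs.append(train_eval(best, X[train], y[train], X[test], y[test]))
    return float(np.mean(accs)), float(np.std(accs))
```

The reported numbers in the paper are then the mean and standard deviation of the per-fold test accuracies.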
Hardware Specification | No | The paper does not provide any specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper cites PyTorch in the bibliography but does not specify its version number, nor any other software libraries with the version numbers required for reproducibility.
Experiment Setup | Yes | All intermediate layers have the rectified linear unit function x ↦ max{0, x} as activation function. We build features with k = 1, 2 tensor orders and embedding dimension M = 512.
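To make the quoted setup concrete, here is a minimal sketch of one graph random neural feature map in the spirit of the paper: a single message-passing step with frozen random weights, the ReLU activation x ↦ max{0, x}, and sum pooling to an M-dimensional graph embedding with M = 512 as in the quote. This is not the authors' implementation; the weight scaling, the A + I propagation rule, and sum pooling are illustrative assumptions.

```python
import numpy as np

def random_graph_features(A, X, M=512, seed=0):
    """Map a graph (adjacency A, node features X) to a vector in R^M
    using one random, untrained message-passing layer with ReLU."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    # frozen random weights (assumed 1/sqrt(d) scaling)
    W = rng.standard_normal((d, M)) / np.sqrt(d)
    # propagate over A + I, then apply ReLU: x -> max{0, x}
    H = np.maximum(0.0, (A + np.eye(n)) @ X @ W)
    # sum pooling yields a fixed-size graph-level embedding
    return H.sum(axis=0)
```

Because the weights are random and never trained, distances between such embeddings can be computed directly, which is the use case the paper's Section 8.1 evaluates as M grows.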