Rethinking Kernel Methods for Node Representation Learning on Graphs
Authors: Yu Tian, Long Zhao, Xi Peng, Dimitris Metaxas
NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically evaluate our approach on a number of standard node classification benchmarks, and demonstrate that our model sets the new state of the art. |
| Researcher Affiliation | Academia | Yu Tian (Rutgers University, yt219@cs.rutgers.edu); Long Zhao (Rutgers University, lz311@cs.rutgers.edu); Xi Peng (University of Delaware, xipeng@udel.edu); Dimitris N. Metaxas (Rutgers University, dnm@cs.rutgers.edu) |
| Pseudocode | No | The paper describes the proposed method using mathematical formulations and descriptive text, but it does not include a formal pseudocode block or algorithm. |
| Open Source Code | Yes | The source code is publicly available at https://github.com/bluer555/KernelGCN. |
| Open Datasets | Yes | We evaluate the proposed kernel-based approach on three benchmark datasets: Cora [22], Citeseer [11] and Pubmed [30]. |
| Dataset Splits | Yes | The statistics of the datasets together with their data split settings (i.e., the number of samples contained in the training, validation and testing sets, respectively) are summarized in Table 1. |
| Hardware Specification | No | The paper mentions time and space savings on the Cora dataset but does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions that source code is available and refers to supplementary material for implementation details, but it does not list specific software dependencies with version numbers in the main text. |
| Experiment Setup | Yes | "All these variants are implemented by MLPs with two layers." and "In Table 4, we implement three variants of K3 (2-hop and 2-layer with ωh by default) to evaluate the proposed node feature aggregation schema. ... We show results with different layers ranging from 1 to 3." |
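
For context, the experiment-setup quote states that the paper's variants are implemented as two-layer MLPs over aggregated node features. The sketch below is a minimal illustration of such a two-layer MLP in PyTorch; the class name `TwoLayerMLP`, the hidden size, and the dropout rate are assumptions made for illustration and are not taken from the paper or its released code.

```python
import torch
import torch.nn as nn


class TwoLayerMLP(nn.Module):
    """Illustrative two-layer MLP, matching the paper's statement that its
    variants are implemented by MLPs with two layers. Hidden size and
    dropout are assumed values, not taken from the paper."""

    def __init__(self, in_dim, hidden_dim=64, num_classes=7, dropout=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        # x: aggregated node features, shape (num_nodes, in_dim)
        return self.net(x)


# Usage example with random features standing in for aggregated node features.
if __name__ == "__main__":
    model = TwoLayerMLP(in_dim=1433)            # 1433 = Cora feature dimension
    logits = model(torch.randn(2708, 1433))     # 2708 = number of Cora nodes
    print(logits.shape)                         # torch.Size([2708, 7])
```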