Kriging Convolutional Networks
Authors: Gabriel Appleby, Linfeng Liu, Li-Ping Liu | pp. 3187-3194
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'Empirically, we show that this model outperforms GNNs and kriging in several applications.' and 'empirical studies indicating the KCN's advantage over baseline models.' |
| Researcher Affiliation | Academia | Gabriel Appleby, Linfeng Liu, Li-Ping Liu Department of Computer Science, Tufts University {Gabriel.Appleby, Linfeng.Liu, Liping.Liu}@tufts.edu |
| Pseudocode | Yes | Algorithm 1: The training algorithm of KCN |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the KCN methodology developed in this paper is openly available. |
| Open Datasets | No | The paper mentions 'bird count data from the eBird project (Fink et al. 2010)', 'Yelp ... restaurant ratings', and 'The National Oceanic and Atmospheric Administration keeps detailed records of precipitation levels across the United States.' but does not provide specific links, DOIs, repositories, or direct citations for accessing these datasets. |
| Dataset Splits | No | The paper states 'We split the dataset into a training set and a test set by 1:1' for all experiments. It does not explicitly provide percentages or counts for a separate validation split. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions using libraries and implementations such as Automap, an implementation of Random Forest, Kipf's GCN implementation, and the Spektral graph deep learning library, but it does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | Yes | We tune the following hyperparameters: hidden sizes ((20, 10), (10, 5), (5, 3)), dropout rate (0, 0.25, 0.5), and kernel length (1, 0.5, 0.1, 0.05). We also employed early stopping to decide the number of epochs. |
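The experiment-setup row lists three hyperparameter axes. A minimal sketch of the resulting grid search space (the dictionary keys and variable names here are illustrative, not taken from the paper):

```python
from itertools import product

# Hyperparameter values reported in the paper's experiment setup.
hidden_sizes = [(20, 10), (10, 5), (5, 3)]
dropout_rates = [0.0, 0.25, 0.5]
kernel_lengths = [1.0, 0.5, 0.1, 0.05]

# Enumerate every configuration a full grid search would evaluate.
grid = [
    {"hidden": h, "dropout": d, "kernel_length": k}
    for h, d, k in product(hidden_sizes, dropout_rates, kernel_lengths)
]

print(len(grid))  # 3 * 3 * 4 = 36 configurations
```

Even this small grid yields 36 training runs per dataset, which is why the paper's early-stopping criterion (rather than a tuned epoch count) keeps the search tractable.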